How to build a project using Cargo in an offline environment?

I have a laptop running OS X which can access the internet, and a Linux server which cannot access the internet for security reasons.
I want to build a Rust project with some dependencies on the Linux server. Because the Linux server cannot access the internet, I ran the cargo build command on the OS X laptop, let it download the dependency sources into the .cargo directory, and then copied those files to the Linux server and put them into the /root/.cargo directory.
I made the file structures the same, but when I run cargo build on the Linux server, it still tries to access the network and the build fails. The cargo build command always tries to connect to the internet even though the dependency sources are already in the .cargo directory.
How can I build a Rust project with dependencies in an offline environment? Why does copying the dependency sources not work?

Good News! As of Rust 1.37, you can use Cargo's own vendor command to download and bundle your crate's dependencies in the crate itself:
First, run cargo vendor. This will set up a new directory named vendor in the root of your crate. It will then download the dependencies from crates.io and git, and store them in this new directory.
When cargo vendor is done downloading all the required dependencies, it will print a set of instructions that you'll need to follow. At the time of this writing, you only need to copy a few lines to .cargo/config.toml. Note that config.toml is relative to the root of your crate and is not the one in your home directory.
Once you're done, your crate will be completely self-contained as far as dependencies are concerned. You can couple this approach with Rust's offline installers to build Rust programs completely offline.
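The few lines in question are a source replacement. They look roughly like this (cargo vendor prints the exact text, which may differ between Cargo versions, so follow its output rather than copying this verbatim):

```toml
# .cargo/config.toml, relative to the crate root
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

With this in place, cargo build resolves every dependency from the vendor directory instead of the network.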

For Rust 1.37+ see: https://stackoverflow.com/a/65254702/147192
The short answer is: prior to 1.37, it's complicated.
The long answer is that cargo will attempt to connect to GitHub in order to check the index of the crates that the Cargo.toml file references.
I recommend checking out the cargo-vendor prototype (by aturon, a member of the Rust tooling subteam) to solve this issue; otherwise, you could look at how some people have created a mirror of crates.io in order to avoid the dependency on the internet.
There is demand for Rust builds that do not require the internet, and people are working on it, but there is no blessed solution for now.

Check out romt - Rust Offline Mirror Tool.
Romt (Rust Offline Mirror Tool) aids in using the Rust programming language in an offline context. Instructions and tooling are provided for:
Mirroring of Rust ecosystem artifacts:
Toolchains (Rustc, Cargo, libraries, etc.)
Rustup (toolchain multiplexer)
Crates.io (community-supplied Crates)
Incremental artifact downloading.
Incremental artifact transfer to offline network.
Artifact serving in offline context (offline computer, disconnected network).

I was able to replace use of the official crates.io registry by adding this
.cargo/config file in my project (under Windows, %USERPROFILE%\.cargo\config seems to be ignored):
[source.mirror]
registry = "http://localhost:8185/auser/crates.io-index.git"
[source.crates-io]
replace-with = "mirror"
This also works using a file-based clone of the git registry, downloaded with either git clone --bare or git clone --mirror:
registry = "file://c:/github/crates.io-index.git"
cargo build now prints
Updating 'c:\github\crates.io-index.git' index
instead of Updating crates.io index
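Putting both variants together, a complete source-replacement config might look like this (the mirror name, port, and paths are examples taken from this answer; adjust them to your own setup):

```toml
# .cargo/config in the project root
[source.mirror]
# A locally served clone of the crates.io index...
registry = "http://localhost:8185/auser/crates.io-index.git"
# ...or, alternatively, a file-based clone:
# registry = "file://c:/github/crates.io-index.git"

[source.crates-io]
replace-with = "mirror"
```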

Related

Should end user utilities/applications be registered on crates.io?

Is it acceptable to register generally useful utilities/applications on crates.io?
The FAQ doesn't address this, and from browsing, there are examples of end-user applications (mostly command-line tools).
Or is crates.io meant only for libraries?
I'm asking this because the documentation hints at library use, semantic versioning for APIs, etc., but doesn't explicitly cover the case of packaging applications.
Yes, because you can use cargo install to install and manage those applications system-wide. If this use were discouraged, I would suspect that command to not exist at all, or at least have a very limited applicability.
Snippet from cargo install --help:
Usage:
cargo install [options] [<crate>]
cargo install [options] --list
[...]
This command manages Cargo's local set of installed binary crates.
Only packages which have [[bin]] targets can be installed, and all
binaries are installed into the installation root's bin folder. The
installation root is determined, in order of precedence, by --root,
$CARGO_INSTALL_ROOT, the install.root configuration key, and
finally the home directory (which is either $CARGO_HOME if set or
$HOME/.cargo by default).
There are multiple sources from which a crate can be installed. The
default location is crates.io but the --git and --path flags can
change this source. If the source contains more than one package (such
as crates.io or a git repository with multiple crates) the <crate>
argument is required to indicate which crate should be installed.
This should not be the primary reason to publish an application to crates.io, but I'm listing it here because it's still a good reason. :)
The Rust team will occasionally use a tool called crater to check for regressions in all crates published on crates.io, usually before merging a pull request that has uncertain consequences. If you wrote some code that happens to compile today but would stop compiling [1] due to a bug fix in the compiler, they may even submit a pull request to your project that fixes your code!
[1] Usually, when such a breaking change occurs, there's at least one prior release in which a warning is reported before the warning is turned into an error.

Right way to set up Rust using Vim

I've finally started to dive into Rust and want to clarify some issues when it comes to setting up everything nicely.
I'm using vim on Linux and found a nice plugin for syntax highlighting. Autocompletion is somewhat troublesome though, using phildawes/racer.
The plugin needs the src location for Rust, which would in fact not be that big of a deal if I knew where said directory was (I only found the binaries and libs when using the suggested curl <...> | sh install). The sources are downloadable separately, although I didn't find a Rust installer that sets up the sources in, say, /usr/local/src/rust; I only found the binaries and libs.
Second, I've looked through the Cargo docs and didn't find anything about where the extern dependencies are cloned to (wouldn't this be the source directory?).
Also, if the Rust sources need updating, setting everything up manually is kind of lame.
Is the quintessence to clone the Rust repository and build it yourself?
The plugin needs setting up the src location for Rust, which is in fact not that big of a deal, if I knew where said directory was
I couldn't find the sources either. If you just want the sources without all the history:
For 1.0.0,
git clone --depth=1 --branch 1.0.0 --single-branch https://github.com/rust-lang/rust/
or for nightly
git clone --depth=1 --single-branch https://github.com/rust-lang/rust/
Second I've looked through the cargo docs and didn't find anything about where the extern dependencies are cloned to (wouldn't this be the source directory?)
In the standard installation, there's a directory .cargo in your home directory, which contains git/checkouts for the cloned crates.
You should probably try multirust though, which allows you to easily manage multiple Rust installations in ~/.multirust.
With multirust, your crate checkouts might be in e.g. ~/.multirust/toolchains/nightly/cargo/git/checkouts, not ~/.cargo/git/checkouts.
Is the quintessence to clone the Rust repository and build it yourself?
No, that's luckily not necessary anymore, unless you're working on the compiler/stdlibs, or trying to cross-compile. With multirust, updating is reduced to multirust update or multirust update nightly, etc.

What is the intended/planned way of configuring/installing software that uses Rust Cargo as build system?

Existing build systems usually have some kind of install target that is used either manually (for installing into /usr/local or another location the user can access) or automatically (by the package build systems of binary-based distros, or by the package managers of source-based ones).
What is the intended way of installing software that uses Cargo? What should an analog of make install look like?
Cargo itself uses additional configure/make stuff that handles configuration, detection of system dependencies, running cargo build and installation.
Is this the right way for any other software built with Cargo? That is, are there plans to cover these tasks in Cargo itself, or is Cargo intended only as a tool for dependency fetching and compilation, without any configuration, detection of installed dependencies, or installation?
Or are there any plans to add this functionality?
cargo install
As of Rust 1.5 you can use cargo install to install binary crates onto your system. You can install crates from:
crates.io (the default), using cargo install crate_name
any git repository, using cargo install --git repository_url
any directory, using cargo install --path /path/to/crate
The first two have additional options you can specify:
With crates.io, you can use --vers to specify the crate version.
With git repositories, you can use --branch to set the branch to install from, --tag to specify the tagged release to use, and --rev to build from a specific commit.
Installation location:
cargo install can be configured to install in a custom directory through the following methods, in order of precedence (highest first):
By passing --root /path/to/directory (this path can be relative)
By setting the $CARGO_INSTALL_ROOT environment variable
By setting the install.root configuration key
By setting the $CARGO_HOME environment variable (which will affect more than the installation directory of cargo install)
If none of the above are present, cargo will install the crates in ~/.cargo/bin.
In all of the above cases, the output files will actually be placed in the bin subdirectory (e.g. --root /path/to/directory will actually place output in /path/to/directory/bin).
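The install.root configuration key mentioned above lives in Cargo's config file; for example (the path is illustrative):

```toml
# ~/.cargo/config
[install]
root = "/opt/rust_crates"
```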
Uninstallation
cargo uninstall can be used to remove previously-installed crates. If you have multiple crates with the same name installed, you can specify --root to only remove the version in that directory.
Example: I want to use rustfmt:
I can use the version on crates.io:
cargo install rustfmt
I like using original releases:
cargo install rustfmt --vers 0.0.1
I want it installed in /opt/rust_crates:
cargo install rustfmt --root /opt/rust_crates
I really need to use the bleeding-edge version:
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git
The latest commit has a bug in it!
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git --rev f5bd7b76e0185e8dd37ae6b1b5fb5e11187f0b8c
I truly desire the version that uses git submodules for its dependencies:
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git --branch submods
I've cloned it and made some edits:
cargo install --path ~/my_rustfmt
Actually, I insist on doing my formatting entirely manually:
cargo uninstall rustfmt
(This answer is intended for developers who want to distribute their own programs, not users who have received a Cargo project and need to install it on their own system; that's the domain of Toby's answer.)
In addition to Toby's Answer:
Cargo's install feature is not meant to be the primary way to distribute Rust programs; it's designed to be used only for distribution to other Rust developers. There are several drawbacks to using this feature:
Cargo requires end-users to install the entire Rust toolchain first.
Users will have to build the program locally, which can be slow, especially in Rust.
There's no support for upgrading programs once they're installed (without an additional tool).
There's (currently) no way to include assets, such as documentation.
The packages will be installed only for the current user unless flags are passed to cargo install.
In other words, cargo install is the make && sudo make install of Cargo programs; it's not the ideal way to distribute a Rust program, unless it's intended primarily for Rust programmers.
So what is the correct way?
Let's look at the alternatives.
Manually distribute a tarball/zip
You can replicate the effects of cargo install by simply using cargo build --release. This will place a (mostly, see the drawbacks below) statically linked crate binary in target/release/crate_name, and this can be repackaged into a .tar.gz or .zip and given out to other users.
Pros:
Doesn't require users to install Rust or build the program themselves.
Allows developers to copy assets into the tarball/zip and distribute them along with the program itself.
Cons:
Installing a .tar.gz/.zip is nonstandard and generally not considered ideal for most users.
If the crate needs any system dependencies beyond libc, it will fail to load them with a difficult-to-understand error.
This requires a developer to manually build a package to release for each version and platform combination.
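As a sketch, the tarball approach boils down to a few commands. The crate name myapp is hypothetical, and because cargo build --release cannot run outside a real project, a placeholder file stands in for the built binary here so the packaging steps themselves are runnable:

```shell
set -e
CRATE=myapp   # hypothetical crate name

# In a real project: cargo build --release
# Here a placeholder stands in for target/release/$CRATE.
mkdir -p target/release
printf 'placeholder' > "target/release/$CRATE"

# Stage the binary (plus any assets you want to ship) and pack it up.
mkdir -p "dist/$CRATE"
cp "target/release/$CRATE" "dist/$CRATE/"
tar -czf "$CRATE.tar.gz" -C dist "$CRATE"
```

Users then unpack the archive and copy the binary somewhere on their PATH.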
Use a CI service to build releases
It's possible to recreate any of these methods using a cloud-based CI service. For example, on Travis CI you can use the Trust project to deploy in much the same way as with a manual tarball, but automatically, with only a pushed tag being required.
Pros:
(All of the advantages of a tarball, plus)
The developers don't have to manually release the program, they just need to tag a release.
As a side effect, it's possible to build for every platform the program supports at once.
Cons:
The process can be frustrating to debug if it doesn't work correctly, because there's limited control over the server.
The build process is tied to a service, which means releases can be missed if the service is down when they are released.
With Trust or similar tools, you're still ultimately distributing a .tar.gz/.zip, which means there's still inconvenience for users and a lack of system dependency management.
In addition to Travis, see Appveyor and GitHub Actions as possible build platforms.
Provide a package
This is considered the ideal method for many end users, and is the standard way to distribute any program, not just Cargo programs. This alleviates almost every issue with the tarball approach, though not without some problems of its own.
Pros:
Included in the system like any other program.
Can be submitted to Linux distribution repositories to allow programs to be installed in only one command.
Allows updating, removal, and asset inclusion.
Tracks system dependencies, which is especially helpful for GUI apps.
Cons:
By far the most complex of these options.
Requires building a package separately for every supported platform (this can be alleviated with CI, but it will be even more complex to set up this way).
This approach is best handled with additional tools:
cargo-deb: Build a package for Debian and Ubuntu.
cargo-rpm: Build a package for Fedora, Red Hat, and CentOS.
cargo-aur: Build a package for Arch Linux.
cargo-wix: Make a Windows Installer package.
These are usually tools meant to be run by developers to create the files used to generate packages. See their documentation for more information.
Source: https://rust-cli.github.io/book/tutorial/packaging.html

Preferred way to build multiple projects with Jenkins, CMake and pkg-config?

I am developing two libraries A and B with B depending on A, both managed in their own Git repositories. The libraries are built with CMake and installed in standard UNIX directories. During installation a .pc file is also installed that is used by pkg-config. Library B uses pkg-config to find library A, therefore it is necessary that either library A is installed system-wide with make install or the PKG_CONFIG_PATH is set to the appropriate directory.
Now, I use Jenkins to build library A on a remote machine. Unfortunately, library B cannot be built because the dependency is not met (pkg-config cannot find library A). Setting the paths in a pre-build step does not work because each command is run in its own shell.
The questions are
Can I somehow make install library A? Or,
can I somehow point CMake to /var/lib/jenkins/jobs/libA/install_dir/lib?
Is there a better way to build projects with inter-dependent libraries?
To answer your questions in order:
To make install library A - You can configure the Jenkins job that builds library A to archive the library as a build artefact. Then the job to build library B can download the artefact from Jenkins at the start of the run – e.g. http:///job/libA/lastSuccessfulBuild/artifact/
Once the library B job has collected library A it can then be installed and used.
Configuring CMake – I don't know enough about CMake, so I'm afraid I can't answer that.
Is there a better way – Possibly using Rake; we use it to control a build chain with lots of dependencies. Although I'm not sure how well it would work if library A has to be built on a remote machine. Things might be simpler to manage if both libraries are built on the same machine.
Using artifacts, as suggested by user1013341, is one of the steps that was needed to solve this problem. But to get it working with pkg-config we have to do a little more:
I setup library A's CMakeLists.txt to produce a tarball with make package_source.
After a successful build of library A, Jenkins creates this tarball and stores it as an artifact.
Library B uses the Copy Artifact Plugin to get the tarball and untars it. Inside the tarball there is still the built project and the .pc file pointing to the install location of library A.
In the next build step, I use the EnvInject Plugin to set the PKG_CONFIG_PATH and the LD_LIBRARY_PATH to the untarred library A.
Last but not least, the normal CMake build process can be started and the correct paths are picked up according to the environment variables.
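The variables injected in step 4 can be sketched like this (the install directory under the Jenkins workspace is hypothetical; point it at wherever the library A tarball was untarred):

```shell
# Hypothetical location of the untarred library A artifact.
LIBA_DIR="${WORKSPACE:-/var/lib/jenkins/workspace/libB}/libA-install"

# Let pkg-config find libA's .pc file and the loader find its shared objects.
export PKG_CONFIG_PATH="$LIBA_DIR/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export LD_LIBRARY_PATH="$LIBA_DIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

With the EnvInject Plugin, the equivalent assignments go into the "Properties Content" of the build step rather than a shell script, so they persist for the rest of the build.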

What are the files from the 'make' of git that I actually need to run git?

I'm trying to "portablize" git, so I want to send the required executables from the make process of git to my hosted web server. Can I do that? Do you think the executables will work?
The way I do it is to:
get all Git dependencies (as listed in this Solaris package site, but this works for any Unix platform)
compile those dependencies with --prefix=/home/myuser and install them in the usr/local/lib of my home directory
then compile Git (still avoiding any reference to a system path like /usr/local/lib, but only using the lib and include within my homedir)
and install Git in the /home/myuser/git directory
I can then copy only /home/myuser/git and /home/myuser/usr/local (and $prefix/libexec/git-core as Jakub mentions in the comments) to any other similar server, knowing it will work in isolation from any existing system libraries.
