Does cargo install download binaries compiled on someone else's computer?
Can it be the case that such pre-builts are sometimes downloaded when executing cargo install?
The output of cargo install suggests that compilation takes place, but I am not sure whether I can rely on cargo install never downloading anything pre-compiled to my computer.
Thus, whenever I can, I manually clone a repo and compile the binaries myself, e.g.
git clone https://github.com/mitnk/cicada.git && cd cicada && cargo build --release && sudo mv target/release/cicada /usr/local/bin
instead of installing with, e.g., cargo install -f cicada. I only do the former because I would like to avoid downloading
binaries compiled on someone else's computer. Another reason for this is that I prefer to compile the binaries with --release.
I am not quite sure that such optimization takes place when cargo install is executed.
cargo install downloads and compiles the crate locally, using the same mechanisms as when it builds a crate you have downloaded.
cargo install defaults to release builds. You have to use the --debug flag to build debug builds.
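For concreteness, a sketch of the relevant invocations, using cicada from the question as the example crate:

```shell
# Builds from source with the release profile by default,
# roughly equivalent to `cargo build --release` plus an install step.
cargo install cicada

# Opt out of optimizations and get a debug build instead:
cargo install --debug cicada

# Force reinstallation over an existing binary:
cargo install -f cicada
```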
The build scripts of some crates may download pre-compiled binaries. For example, crates that are bindings to C libraries may download the library. But this is true even if you git clone the source and build it yourself.
Related
I would like to download all the dependencies of my cabal project to my project directory/repository and force cabal to never use ~/.cabal/ and never download any new dependency from the internet. The downloaded libraries should be system independent (not include local paths etc.). Is this possible?
The idea behind this is, to copy the project directory to another (offline) system where the same ghc is installed and it should work out of the box.
According to the cabal docs, you can set active-repositories: none in your cabal.project or cabal file to put cabal-install in offline mode.
Then, you could create a cabal project and try to make all the dependencies that don't come bundled with GHC itself local packages. Clone them in your project folder and add them to the packages: section of cabal.project.
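A minimal cabal.project sketch of that setup (package names and paths are placeholders):

```
-- cabal.project
packages: .
          vendor/dependency-one
          vendor/dependency-two

-- offline mode: never contact a remote package repository
active-repositories: none
```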
The problem with the above is that running cabal clean would require re-compiling all packages afterwards.
As an alternative, create a local no-index package repository in your offline machine, make it the only available package repository using active-repositories:, and put the sdist tarballs of all your dependencies there.
That way the dependencies will only be compiled once.
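A sketch of that alternative, assuming cabal-install 3.4 or newer (for the file+noindex URL scheme); the paths are placeholders. First produce an sdist tarball for each dependency with cabal sdist --output-dir=/path/to/local-repo, then declare the repository:

```
-- cabal.project on the offline machine
repository local-repo
  url: file+noindex:///path/to/local-repo

active-repositories: local-repo
```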
How do I make Cargo run a set of commands before it starts building each package individually? I want to use Cargo to do something to the source code of each dependency it receives before it builds those dependencies.
I expected there to be something like: cargo install stopwatch but could not find it in the docs.
Finding the package version and manually adding the package to .toml:
[dependencies]
stopwatch="0.0.6"
Does not feel automated enough. :)
No, there is no such thing built in to Cargo. There is only a cargo install subcommand which installs the binaries of a crate system-wide.
New third-party Cargo subcommands can be created, and cargo-edit does what you want.
These cargo subcommands can then be installed by cargo install, in a fun meta circle!
% cargo install cargo-edit
# Now `cargo add` is available
% cargo add mycrate
As of Rust 1.62.0, you can use the following command to add dependencies without opening the .toml file yourself.
cargo add dependency@version
More info here: https://doc.rust-lang.org/nightly/cargo/commands/cargo-add.html
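A few illustrative invocations (serde and criterion are just example crate names):

```shell
cargo add serde@1.0                # pin a version
cargo add serde --features derive  # enable a feature while adding
cargo add criterion --dev          # add under [dev-dependencies]
```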
I am on alpine:3.6 and have already installed the zeromq library on the system (compiled from source).
Now I want to use the Node.js binding for it,
so I am using https://github.com/JustinTulloss/zeromq.node
There are instructions here for building the library from source:
https://github.com/JustinTulloss/zeromq.node/wiki/Installation#installation-on-linux--bsd-without-root-access
We can compile the lib on our own, but that places the binaries in the same folder; instead I want npm to use the library installed on the system (/usr/local).
As far as I can guess, it comes down to these two lines, which I don't know much about:
export CXXFLAGS="-I $(readlink -f ../include)"
export LDFLAGS="-L $(readlink -f ../lib) -Wl,-rpath=$(readlink -f ../lib)"
Then npm install will use the libs we just compiled in the zeromq folder.
I don't have much knowledge of CXXFLAGS and LDFLAGS, so is that possible?
The installation instructions you cite are for people without root access, who are therefore unable to install software in the "usual places" like /usr or /usr/local. If you install a library into an "unusual place" like your home directory, you have to tell the compiler and linker where the library can be found. That is what CXXFLAGS and LDFLAGS are used for in this case. Since you seem to have root access and installed the ZeroMQ library in a "usual place", npm install zmq should work without you having to set these variables.
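To make the role of the two variables concrete, here is a small runnable sketch using throwaway directories (the paths are placeholders, not a real ZeroMQ install). -I tells the C++ compiler where to look for headers; -L tells the linker where to find the library at link time; -Wl,-rpath embeds that directory so the dynamic linker can also find it at run time:

```shell
# Set up a fake install prefix to stand in for the cloned zeromq folder
mkdir -p /tmp/zmq-demo/include /tmp/zmq-demo/lib
cd /tmp/zmq-demo/include   # mimic running from inside the source tree

# readlink -f turns the relative paths into absolute ones,
# so the flags stay valid no matter where the build runs from
export CXXFLAGS="-I $(readlink -f ../include)"
export LDFLAGS="-L $(readlink -f ../lib) -Wl,-rpath=$(readlink -f ../lib)"

echo "$CXXFLAGS"   # the absolute include path
echo "$LDFLAGS"    # the absolute library path, for link time and run time
```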
Update: The above does not seem to work. However, according to https://github.com/JustinTulloss/zeromq.node#project-status, this module is deprecated anyway. The new zeromq module works without compilation for me. See this minimal Dockerfile:
FROM node:alpine
RUN npm install zeromq
Note: This does not use the pre-installed library but a pre-built one instead. However, you could leave the pre-installed library out of the base image to save size.
It is possible now, with 6.0.0-beta.4; just tested:
npm install zeromq@6.0.0-beta.4 --zmq-shared
on a system with the zeromq package installed, i.e. on Alpine:
apk add zeromq-dev
I am almost sure that the -dev package is not required, but I did not test that.
UPDATE (a few minutes later)
It is possible to use zmq as an external library; it still requires a compilation step, but without downloading the whole package.
Given that zeromq-dev is installed:
apk add zeromq-dev
(in this case -dev is required), then
npm i zeromq-stable --zmq-external
will compile the Node.js module but not the zeromq library itself, a much faster option, but still not the lightweight Docker image you need.
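Putting the pieces together, a sketch of an Alpine image that links the binding against the system libzmq (the version tag is illustrative, and which -dev packages are strictly needed depends on the flag you pick):

```
FROM node:alpine
# system libzmq headers and library
RUN apk add --no-cache zeromq-dev
# link the Node.js binding against the system library
RUN npm install zeromq@6.0.0-beta.4 --zmq-shared
```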
Let MyLib be my local Haskell library. I can build it with cabal build and install it with cabal install, but can I use it in other projects without the installation step?
I'm developing several libraries and installing them after every change is not a good solution.
Let's say you have two entirely separate projects, one called my-library and another called my-project. And my-project depends on my-library.
The way to build my-library and make it available to other projects is cabal install my-library. Once that's done, any other project can use the library.
Now you're ready to build my-project using the command cabal install my-project. It will not rebuild or reinstall my-library, but it will link your project with the library.
Now, if you make modifications to my-library, be sure to update the version number before running cabal install my-library. If you forget to bump the version number, you will be warned that my-project will be made obsolete. Now the old version and the new version of your library are available to other projects.
You can continue to run your projects. They will happily continue to use the previous version of my-library until you do another cabal install my-project. So there is no need to re-install everything after every change.
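The cycle described above, as a command sketch (old v1-style cabal commands, with placeholder package names):

```shell
cabal install my-library     # build and register the library
cabal install my-project     # links against the installed my-library

# after changing my-library: bump the version in my-library.cabal, then
cabal install my-library     # e.g. 2.0 now sits alongside 1.0
cabal install my-project     # rerun only when you want the new version
```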
If you do want to rebuild your projects, but continue to work with an older version of your library, you can specify that in the build-depends section of your cabal file. For example, if you have versions 1.0 and 2.0 of my-library installed, you can build your project against the older version like this:
build-depends: my-library==1.0, ...
There isn't a great solution to your problem, but you can use sandboxes to keep your development environment a bit cleaner.
With cabal-1.18 or newer, you can create a sandbox with cabal sandbox init and then you can either install to that sandbox or add-source (cabal sandbox add-source <path to library>).
This helps to keep unstable libraries (and their potentially unstable dependencies) out of your user package database, and that can help prevent 'cabal hell' (unsolvable conflicts between dependencies). However, that doesn't directly help reduce the number of commands you need to issue each time you want to do a top-level build.
What you can do though, is set up a simple script that performs the add-source commands and builds your top-level package. eg:
#!/bin/bash
cabal sandbox init # a no-op if the sandbox exists.
cabal sandbox add-source ../MyLib
cabal install --dependencies-only
cabal build
Granted, you could do that before, but this time you can also easily clean up (removing all the installed artifacts) by cleaning the sandbox:
cabal sandbox delete