There is a third-party C library that I'd like to link to in my Rust project. It is hosted on GitHub and compiles only as a static library. Is there any way to have Cargo fetch this dependency for me? I'm thinking there isn't. I tried adding it as a dependency and got a "Could not find Cargo.toml in ..." error.
As an alternative, I thought of modifying my build.rs file to use the git2-rs crate to download a tag of the library, with the tag name possibly passed in through an environment variable.
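For illustration, here is a rough sketch of what I have in mind; the repository URL and the LIBFOO_TAG variable are placeholders:

// build.rs sketch: fetch a tagged checkout of the C library with git2-rs.
fn main() {
    let out_dir = std::env::var("OUT_DIR").unwrap();
    let src_dir = std::path::Path::new(&out_dir).join("libfoo-src");
    // Tag name supplied by the user, with a fallback (names are hypothetical).
    let tag = std::env::var("LIBFOO_TAG").unwrap_or_else(|_| "v1.0.0".into());

    if !src_dir.exists() {
        let repo = git2::Repository::clone("https://github.com/example/libfoo", &src_dir)
            .expect("failed to clone");
        let obj = repo.revparse_single(&tag).expect("tag not found");
        repo.checkout_tree(&obj, None).expect("checkout failed");
        repo.set_head_detached(obj.id()).expect("failed to detach HEAD");
    }
    // ...then compile src_dir as a static library and emit the linker directives.
}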
Another option would be to include the source of the C library in my project, but then users of my crate who want a different (but compatible) version of the third-party library wouldn't be able to swap it in as easily.
So how are others in the community handling situations like this?
In general, you want to create a libfoo-sys crate. That crate will have a build script that compiles the native library and sets up the linker options.
The build script can use build-time dependencies like the cc crate to make the process of downloading and compiling the native library easier.
You can use environment variables or features to choose where the native library comes from: you could use one already installed by the user via their system package manager (or perhaps a hand-compiled version); you could download the source from somewhere; you could include the code in the repository; or you could use a git submodule to reference another git repository instead of actually copying code.
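For illustration, a minimal sketch along those lines, assuming the C sources are vendored under vendor/ and using a made-up FOO_LIB_DIR variable to opt into a preinstalled copy:

// build.rs sketch for a libfoo-sys crate; paths and names are placeholders.
fn main() {
    if let Ok(dir) = std::env::var("FOO_LIB_DIR") {
        // Link against a copy the user already has (system or hand-compiled).
        println!("cargo:rustc-link-search=native={}", dir);
        println!("cargo:rustc-link-lib=static=foo");
    } else {
        // Otherwise compile the vendored sources; cc emits the link flags itself.
        cc::Build::new()
            .file("vendor/foo.c")
            .include("vendor/include")
            .compile("foo"); // produces and links libfoo.a
    }
}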
In many cases, you will also use a tool like rust-bindgen to create the "raw" Rust bindings for the C library.
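For example, sketched under the assumption that a small wrapper.h header #includes the library's public headers:

// In build.rs: generate the raw bindings with bindgen.
fn main() {
    let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindgen::Builder::default()
        .header("wrapper.h") // hypothetical header pulling in the C API
        .generate()
        .expect("failed to generate bindings")
        .write_to_file(out_path.join("bindings.rs"))
        .expect("failed to write bindings");
}

The generated file is then pulled into the crate with include!(concat!(env!("OUT_DIR"), "/bindings.rs")).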
As a Rust driver crate developer, I would like to perform the following steps when my crate is downloaded and installed as a dependency of another Rust program:
Check the platform, i.e. Windows, UNIX, or macOS.
Download the corresponding platform-specific binary from an external website.
Set an environment variable pointing to the download location.
I know this is possible in Node, Python, or R, but I'm not sure whether it's possible in Rust.
You can use a build script to achieve that (but it is not what you should do; please see the note below).
The script will be compiled and executed before Cargo starts building your library.
Inside the script you can check the platform; note that cfg!(...) in a build script reflects the host machine, so for the target platform use the CARGO_CFG_TARGET_OS environment variable that Cargo sets for build scripts.
There are plenty of libraries for downloading files over HTTP, for example reqwest.
You can set an environment variable via cargo:rustc-env=VAR=VALUE; it becomes visible to the compiled crate at compile time (e.g. through env!).
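Putting those pieces together, here is a minimal sketch; the download URL and the DRIVER_PATH variable are made up, and reqwest (with its blocking feature) would go under [build-dependencies]:

// build.rs sketch: pick a platform-specific binary, download it, expose its path.
fn main() {
    // CARGO_CFG_TARGET_OS is set by Cargo when compiling build scripts.
    let target_os = std::env::var("CARGO_CFG_TARGET_OS").unwrap();
    let binary = match target_os.as_str() {
        "windows" => "driver-windows.dll",
        "macos" => "libdriver-macos.dylib",
        _ => "libdriver-linux.so",
    };

    let dest = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap()).join(binary);

    // Download the binary (placeholder URL).
    let url = format!("https://example.com/downloads/{}", binary);
    let bytes = reqwest::blocking::get(url).unwrap().bytes().unwrap();
    std::fs::write(&dest, &bytes).unwrap();

    // Make the location available to the crate at compile time via env!("DRIVER_PATH").
    println!("cargo:rustc-env=DRIVER_PATH={}", dest.display());
}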
IMPORTANT NOTE
Most Rust users don't expect that kind of behavior from a build script. There can be dozens of problems with the approach; just a few off the top of my head:
First of all, there may be security issues: you would be downloading and running unverified binaries on users' machines.
The approach will break builds on the client side whenever the network or the download host is unavailable (for example in offline or sandboxed builds).
I believe it's better to ship all the binaries you need as part of the crate. You can use include_bytes! for that.
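For instance (the paths are hypothetical; the files must be shipped inside the published crate):

// Embed a prebuilt binary per platform instead of downloading at build time.
#[cfg(target_os = "windows")]
static DRIVER: &[u8] = include_bytes!("../binaries/driver-windows.dll");
#[cfg(target_os = "macos")]
static DRIVER: &[u8] = include_bytes!("../binaries/libdriver-macos.dylib");
#[cfg(all(unix, not(target_os = "macos")))]
static DRIVER: &[u8] = include_bytes!("../binaries/libdriver-linux.so");

At runtime you can write DRIVER out to a known location instead of downloading anything.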
I've created a nimble library package as per the documentation. When I try to build it using nimble build I get the following error.
Error: Nothing to build. Did you specify a module to build using the bin key in your .nimble file?
I can do this, and it does fix the error, but according to the documentation, adding the bin key to the .nimble file turns my package into a binary package.
Other things I have tried:
Use nimble install: This does not appear to verify that my code will actually compile and will happily install anything to the local package directory (I added a C# class to my .nim file, for example, and it was successfully installed).
Use nimble c: This works but I have to pass in the path to the nim file I want to compile and the binDir entry in the .nimble file is ignored resulting in the output being placed in the same directory as the file being built. This complicates the development cycle because I have to manually clean up after the compiler.
Use the compiler directly. This is pretty much the same as the previous option with the same flaws.
I guess I could also create a separate .nim file and import my library after it is installed, but this is a lot of overhead just to verify that a package in the early stages of development will actually compile.
I just want to be able to verify that the source code in my library package is syntactically correct and will compile. How is this meant to be done for library packages?
From your provided link to the nimble package manager documentation, I have the feeling that
https://github.com/nim-lang/nimble#tests
is what you are looking for. But I have never used the test command, so I am not sure; I still do my tests manually. I read the nimble docs maybe 4 years ago and cannot really remember them. Currently there is a lot of package-manager-related work going on: I heard there is a new, alternative package manager called nimph, and from a forum thread I think I read that nimble is going to change and improve as well. Maybe you should consider subscribing to the Nim forum; that is the place where the bright Nim devs are. Well, at least a few of them.
I'm trying to write a small web application fully in Haskell. I have 3 logical packages:
A backend, using servant
A frontend, using reflex, reflex-dom and servant-reflex
A shared package defining the Servant API for communication between the 2 and some data types for that API to use.
That last package is giving me trouble. I don't know how to structure the project so the other 2 packages can use it. I see 2 options at the moment:
Each package has its own stack file and git repository. Import the shared package using an extra-deps git link. The problem with this approach is it means I have to push any change to the shared package to GitHub before I can test it out with the other packages. Also I'd have to build everything separately.
Use a single repository with a single stack.yaml file. I'd prefer this, since it keeps everything together and also ensures all packages use the same resolver. In this case I would list all the packages in the packages: option. However, the client needs to be compiled with GHCJS, not GHC, and I don't see an option in the documentation to override the compiler for one specific package.
Is there a way to make option 2 work? Or is there a better way to do this?
The recommended approach is to have two stack project files (e.g. stack-frontend.yaml using GHCJS and stack-backend.yaml using GHC), and then use the --stack-yaml argument to switch between them (e.g. use stack --stack-yaml=stack-frontend.yaml build to build the frontend, and stack --stack-yaml=stack-backend.yaml build to build the backend). Both stack-*.yaml files can include the shared servant API.
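For concreteness, the two files could look roughly like this; the package names, resolver, and GHCJS compiler version are placeholders, so check the stack documentation for a compiler that matches your resolver:

# stack-backend.yaml: shared API + backend, built with GHC
resolver: lts-9.21
packages:
- shared-api
- backend

# stack-frontend.yaml: shared API + frontend, built with GHCJS
resolver: lts-9.21
compiler: ghcjs-0.2.1.9009021_ghc-8.0.2
compiler-check: match-exact
packages:
- shared-api
- frontend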
There may not be a good answer for this question, but I have code that I would like to share between two different Rust projects WITHOUT publishing the crate to crates.io.
The code is proprietary and I don't want to put it out into the wild.
You don't have to publish a crate. Specifically, just create the crate (cargo new shared_stuff) then specify the path to the common crate(s) in the dependent project's Cargo.toml:
[dependencies.shared_stuff]
path = "path/to/shared/crate"
The Cargo documentation has an entire section on types of dependencies:
Specifying dependencies from crates.io
Specifying dependencies from git repositories
Specifying path dependencies
I believe that Cargo will allow you to fetch from a private git repository (such as on GitHub or another privately hosted service, such as GitLab), but I haven't tried that personally. Based on my searching, you will need to have previously authenticated or otherwise configured git so that it doesn't require an interactive password entry.
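If you go that route, the dependency is declared with a git key instead of a path; the URL and tag here are hypothetical:

[dependencies]
shared_stuff = { git = "https://github.com/your-org/shared_stuff.git", tag = "v0.1.0" }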
It's theoretically possible to create your own crate registry. I've not even attempted to do this, but the machinery is present in Cargo to handle it.
I am working on bindings for a C++ library.
To do this I wrote a C API wrapper for the library and compiled it to a shared lib (.so file).
My question is, how do I then use and integrate this file into Cargo without forcing the user to install it? Currently I build the C++ code via a Makefile called from the build key in Cargo.toml, but I am unsure what to do with the compiled lib.
For testing, I can either use rpath or LD_LIBRARY_PATH to point the executable to the right location, but this will not work when distributing a library.
How are people managing this?
First of all, determine whether you really need a shared library. It's not clear from your question, but if you compiled your own wrapper into a shared library, that's probably unnecessary - you can compile your code into a static library and link it directly into your executable.
Moreover, you can try to link that third-party library statically too; I don't think that should be hard. And yes, you need to use the build key in the manifest (i.e. a build script) to do all of this.
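For example, a rough sketch of a build script that drives your existing Makefile and links the wrapper statically; the make target and library name are assumptions:

// build.rs sketch: build the C API wrapper with make, then link it statically.
fn main() {
    let out_dir = std::env::var("OUT_DIR").unwrap();

    // Ask the existing Makefile to produce libwrapper.a inside OUT_DIR.
    let status = std::process::Command::new("make")
        .arg("static")
        .env("OUT_DIR", &out_dir)
        .status()
        .expect("failed to run make");
    assert!(status.success(), "make failed");

    println!("cargo:rustc-link-search=native={}", out_dir);
    println!("cargo:rustc-link-lib=static=wrapper");
    // The underlying C++ library and runtime may need linking too, for example:
    // println!("cargo:rustc-link-lib=stdc++");
}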
However, if you still need to use a shared library and you don't want the end user to install it herself (which is strange, because that's the point of shared libraries), you have to distribute it manually. For example, you can write a makefile which assembles an archive that your users may extract and use. For your program to find the library correctly, you will either have the user install this archive into the system root directory (e.g. /usr on Linux; then the shared library will be located automatically) or you will have to write a small shell-script wrapper around your executable which locates the shared library and sets the appropriate LD_LIBRARY_PATH.
I'd go for the first path. Usually all major platforms provide means to create installation packages (deb/rpm/pkg.tar.xz/whatever on Linux, brew on Mac, a Windows installer on Windows; though on Windows you can just put your shared library in the same directory as the executable and it will work). You just have to create packages for the platforms your users work on, so your program will be installed into the correct directories and your shared library will be resolved automatically.