Compile-time configuration of installation directories in Rust

The autotools provide a suite of utilities that let developers configure source code at compile time, primarily to ease packaging.
One such capability is a set of installation directory variables, defined at compile time, which avoids hard-coding assumptions about where the application will be installed.
For example:
Variable         Default
--------         -------
prefix           /usr/local
datarootdir      ${prefix}/share
sysconfdir       ${prefix}/etc
localstatedir    ${prefix}/var
...
How can I mimic this behavior in Rust at compile time? Is there an existing standard or best practice for this need?
I'd like to make use of system paths without fully hard-coding them, allowing the packager to customize parts of them.

You can use the std::env! or std::option_env! macros to read an environment variable at compile time. Installation directories can thus be passed as environment variables when building your program.
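For reference, the difference between the two (a minimal illustration, not part of the original answer): env! makes the build fail if the variable is unset, while option_env! yields an Option you can handle yourself.

// Fails the build with an error if PREFIX is not set at compile time:
const PREFIX: &str = env!("PREFIX");
// Yields None instead of failing, so a default can be substituted:
const MAYBE_PREFIX: Option<&'static str> = option_env!("PREFIX");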
If you want to additionally set defaults conveniently, here's a macro:
/// Get an environment variable at compile time, or fall back to a default
macro_rules! env_or {
    ($name:expr, $default:expr) => {
        // This is needed because `Option::unwrap_or` is not a const fn:
        // https://github.com/rust-lang/rust/issues/91930
        if let Some(value) = option_env!($name) {
            value
        } else {
            $default
        }
    };
}
You can use this as follows:
const PREFIX: &str = env_or!("PREFIX", "/usr/local");
To concatenate PREFIX and /share, you can use the concatcp macro from the const_format crate as follows:
const DATAROOTDIR: &str = const_format::concatcp!(PREFIX, "/share");
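Putting it together, a minimal sketch (the per-directory environment overrides are my own extrapolation, not part of the original answer):

const PREFIX: &str = env_or!("PREFIX", "/usr/local");
// Assumption: each directory can be overridden individually at build time;
// otherwise it falls back to a path derived from PREFIX.
const DATAROOTDIR: &str = env_or!("DATAROOTDIR", const_format::concatcp!(PREFIX, "/share"));
const SYSCONFDIR: &str = env_or!("SYSCONFDIR", const_format::concatcp!(PREFIX, "/etc"));
const LOCALSTATEDIR: &str = env_or!("LOCALSTATEDIR", const_format::concatcp!(PREFIX, "/var"));

A packager can then set these variables in the build environment (for example, PREFIX=/usr cargo build --release), since Cargo passes its environment through to the compiler, where option_env! reads it.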

Related

How to deal with assets in Rust?

I have several JSON files containing objects that need to be exported from a module and used (read-only) in various places in the code base. Exporting a function that reads and parses the files, and invoking it every time the objects are needed, seems very wasteful. In Go I'd export a global variable and initialize it in an init function. So how do I go about doing this in Rust?
I guess you are using this for interface definitions between different system parts. This is a known and well-understood problem, and it is usually solved with a build script, as in the case of protobuf.
There is a very good tutorial about how to use a build script to generate files.
This is how this could look like in code.
(All files are relative to the crate root directory)
shared_data.json:
{
    "example_data": 42
}
build.rs:
use std::{
    env,
    fs::File,
    io::{Read, Write},
    path::PathBuf,
};

fn main() {
    // OUT_DIR is automatically set by cargo and contains the build directory path
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    // The path of the input file
    let data_path_in = "shared_data.json";
    // The path in the build directory that should contain the generated file
    let data_path_out = out_path.join("generated_shared_data.rs");

    // Tell cargo to re-run the build script whenever the input file changes
    println!("cargo:rerun-if-changed={data_path_in}");

    // The actual conversion
    let mut data_in = String::new();
    File::open(data_path_in)
        .unwrap()
        .read_to_string(&mut data_in)
        .unwrap();

    {
        let mut out_file = File::create(data_path_out).unwrap();
        writeln!(
            out_file,
            "::lazy_static::lazy_static! {{ static ref SHARED_DATA: ::serde_json::Value = ::serde_json::json!({}); }}",
            data_in
        )
        .unwrap();
    }
}
main.rs:
include!(concat!(env!("OUT_DIR"), "/generated_shared_data.rs"));
fn main() {
    let example_data = SHARED_DATA
        .as_object()
        .unwrap()
        .get("example_data")
        .unwrap()
        .as_u64()
        .unwrap();
    println!("{}", example_data);
}
Output:
42
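For completeness, the generated code assumes lazy_static and serde_json are available as regular dependencies of the crate; a minimal sketch (version numbers illustrative):

[dependencies]
lazy_static = "1"
serde_json = "1"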
Note that this still uses lazy_static, because I didn't realize that the json!() macro isn't const.
One could of course adjust the build script to work without lazy_static, but that would probably involve writing a custom serializer in the build script that turns the JSON into executable Rust code.
EDIT: After further research, I came to the conclusion that it's impossible to create serde_json::Values in a const fashion. So I don't think there is a way around lazy_static.
And if you are using lazy_static, you might as well skip the entire build.rs step and use include_str!() instead:
use lazy_static::lazy_static;

lazy_static! {
    static ref SHARED_DATA: serde_json::Value =
        serde_json::from_str(include_str!("../shared_data.json")).unwrap();
}
fn main() {
    let example_data = SHARED_DATA
        .as_object()
        .unwrap()
        .get("example_data")
        .unwrap()
        .as_u64()
        .unwrap();
    println!("{}", example_data);
}
However, this will result in a runtime error if the JSON is broken, whereas with build.rs and json!() it is a compile-time error.
The general way of solving this problem in Rust is to read the assets in a single place (main(), for example), and then pass a reference to the assets as needed. This pattern plays nicely with Rust's borrow checker and initialization rules.
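Here is a minimal sketch of that pattern (the run() function and the file path are illustrative assumptions, reusing serde_json from the answers above):

use serde_json::Value;

// Hypothetical consumer that receives a read-only reference to the assets.
fn run(shared_data: &Value) {
    println!("{}", shared_data["example_data"]);
}

fn main() {
    // Parse the assets exactly once, then hand out references.
    let shared_data: Value =
        serde_json::from_str(include_str!("../shared_data.json")).unwrap();
    run(&shared_data);
}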
However, if you insist on using global variables:
When we apply Rust's initialization and borrow-checker rules to global variables, it becomes clear why the compiler has a hard time proving, in general, that all accesses to a global variable are safe. Using global variables may therefore require the unsafe keyword, in which case the programmer asserts responsibility for manually verifying that every access happens in a safe way. Idiomatic Rust tries to build safe abstractions to minimize how often programmers need to do this. I wouldn't consider the lazy_static! macro a hack; it is an abstraction (and a very commonly used one) that transfers the responsibility of proving that global access is safe from the programmer to the language.

How do I say that a feature is only available on a given platform?

I know how to say that a dependency is only needed on Windows, but how do I say (as a crate writer) that a feature is only available on Windows?
I tried (based on the way dependencies are specified):
[target.'cfg(windows)'.features]
windbg = []
but this doesn't work. cargo build says:
warning: unused manifest key: target.cfg(windows).features
and a client app using the crate fails, saying that the feature doesn't exist.
Currently, Cargo is not able to tie a feature to a target platform, but you can add target_os to the cfg attributes in your code to tell the compiler that the feature is only available on the target you set.
Say you have defined your feature like this:
#[cfg(feature = "windbg")]
mod windbg {
    //...
}
You'll need to replace it with:
#[cfg(all(target_os = "windows", feature = "windbg"))]
mod windbg {
    //...
}
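A complementary guard, beyond what the answer above shows (standard Rust, but my own addition): you can also make enabling the feature on an unsupported platform a hard error instead of silently compiling the module away.

// Assumption: misuse should fail loudly at compile time.
#[cfg(all(feature = "windbg", not(target_os = "windows")))]
compile_error!("the `windbg` feature is only supported on Windows");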

How to use a crate only for a given platform?

I would like to use the nix crate in a project.
However, this project also has an acceptable alternative implementation for OSX and Windows, where I would like to use a different crate.
What is the current way of expressing that I only want nix in Linux platforms?
There are two steps needed to make a dependency completely target-specific.
First, you need to specify this in your Cargo.toml, like so:
[target.'cfg(target_os = "linux")'.dependencies]
nix = "0.5"
This will make Cargo only include the dependency when that configuration is active. However, this means that on non-Linux OSes you'll get a compile error wherever you use nix in your code. To remedy this, annotate those usages with a cfg attribute, like so:
#[cfg(target_os = "linux")]
use nix::foo;
Of course, this has ripple effects: other code using those items now fails to compile, because the import, function, module, or whatever doesn't exist on non-Linux platforms. One common way to deal with that is to put all usages of nix into one function and provide a no-op version of that function for all other OSes. For example:
#[cfg(target_os = "linux")]
fn do_stuff() {
    nix::do_something();
}

#[cfg(not(target_os = "linux"))]
fn do_stuff() {}

fn main() {
    do_stuff();
}
With this, the function do_stuff exists and can be called on all platforms. Of course, you have to decide for yourself what the function should do on non-Linux platforms.

Writing ENV variables to configure an npm module

I currently have a project in a loose ES6 module format, and my database connection is hard-coded. I want to turn this into an npm module and am now facing the question of how best to let the end user configure the code. My first attempt was to rewrite it as classes to be instantiated, but that made using the code more convoluted than before, so I am looking at alternatives and exploring my configuration options. It looks like writing to process.env would be the way to go, but I am pondering potential issues, no-nos, and other options I have not considered.
Is having the user write config to process.env an acceptable way of configuring an npm module? It's effectively a global write, so namespace collisions are one concern. I have also considered using package.json, but that won't work for things like credentials. Likewise, an rc file is cumbersome. I have not found any docs on the proper methodology, if one exists.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically 5ish options as I see it:
1. Hardcode - not an option.
2. Create a configured scope such as classes - what I have now, and bleh.
3. Use a config library such as node-config - not really a user-friendly option for an npm module.
4. Store as globals/env vars. As suggested in a comment, I can wrap that process in an exported function, ensuring a complex, non-collisive namespace while abstracting it from the end user.
5. Ask the user to create some .rc file - I would if I were big-time like AWS, but not in this case.
I mention this npm use case, but it really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope, at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it has helped me wrap my brain around the options. I am thinking of something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then have the user call setOptions() and call getOptions() internally. Since Node caches a module after the first require, the options object stays configured as it is passed around.
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
    let db = database.connect(options.database);
    ...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
    let db = database.connect(options.database);
    return {
        getUsers() { return db.getUsers() }
    }
}
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
    constructor(options) {
        ...
    }
    ...
}

export default options => {
    return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say 3 or less), you can also allow them to be passed as separate arguments, instead of passing an object.

How to import a platform-specific struct?

I've got a struct in a file that begins with this line:
// +build windows
Therefore it will only be built on Windows. However, the part of the application that initializes everything needs to check if it is running on Windows and if so, create an instance of the struct. I have no idea how to do this without breaking things on other platforms.
For example, if the file contains a function newWindowsSpecificThing() and I compile on Linux, the function won't exist because it is defined in a file that isn't being compiled. (And, of course, this will produce an error.)
How do I work around this dilemma?
I think the solution would be to have some method on your struct that is used on all platforms. Look at how the dir_*.go files work in the os package: func (file *File) readdirnames(n int) (names []string, err error) is available on all platforms because it is provided in dir_plan9.go, dir_unix.go, and dir_windows.go.
For your problem, I'd take the same approach, with some generic method that does the internal work. Your application logic would call that function, and in your file_unix.go you'd define it to do nothing (empty body).
Somewhere you clearly have a function that calls newWindowsSpecificThing(). That call should be in a Windows-specific file; if it were, it wouldn't matter that the function isn't available elsewhere. The fact that you have something "check if it is running on Windows" suggests an if runtime.GOOS == "windows" statement somewhere. Rather than have that, move the entire if into a function defined in a Windows-specific file. You'll also need to define that function in a !windows file, which is fine.
As an example from my code, I have a function:
func Setup() *config {
    var cfg *config
    // setup portable parts of cfg
    return PlatformSpecificSetup(cfg)
}
I then have a file marked // +build windows that defines PlatformSpecificSetup() one way, and another marked // +build !windows that defines it another way. I never have to check runtime.GOOS and I never have to deal with undefined data types. The config struct itself is defined in those files, so it can have different fields for each platform (as long as they agree enough for Setup()). If I were being more careful, I could create a struct like:
type config struct {
    // independent stuff
    plat *platformConfig
}
And then just define platformConfig in each platform file, but in practice I've found that more trouble than it's worth.
