I have several JSON files which contain objects that need to be exported from the module to be used (read-only) in various places in the code base. Exporting a function that reads and parses the files, and invoking it every time the objects are needed, seems very wasteful. In Go I'd export a global variable and initialize it in an init function. So how do I go about doing it in Rust?
I guess you are using this for interface definitions between different system parts. This is a known and well understood problem and is usually solved with a build script, such as in the case of protobuf.
There is a very good tutorial about how to use the build script to generate files.
This is how it could look in code.
(All files are relative to the crate root directory)
shared_data.json:
{
"example_data": 42
}
build.rs:
use std::{
    env,
    fs::File,
    io::{Read, Write},
    path::PathBuf,
};
fn main() {
    // OUT_DIR is automatically set by cargo and contains the build directory path
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    // The path of the input file
    let data_path_in = "shared_data.json";
    // The path in the build directory that should contain the generated file
    let data_path_out = out_path.join("generated_shared_data.rs");
    // Tell cargo to re-run the build script whenever the input file changes
    println!("cargo:rerun-if-changed={data_path_in}");
    // The actual conversion
    let mut data_in = String::new();
    File::open(data_path_in)
        .unwrap()
        .read_to_string(&mut data_in)
        .unwrap();
    {
        let mut out_file = File::create(data_path_out).unwrap();
        writeln!(
            out_file,
            "::lazy_static::lazy_static! {{ static ref SHARED_DATA: ::serde_json::Value = ::serde_json::json!({}); }}",
            data_in
        )
        .unwrap();
    }
}
main.rs:
include!(concat!(env!("OUT_DIR"), "/generated_shared_data.rs"));
fn main() {
    let example_data = SHARED_DATA
        .as_object()
        .unwrap()
        .get("example_data")
        .unwrap()
        .as_u64()
        .unwrap();
    println!("{}", example_data);
}
Output:
42
Note that this still uses lazy_static, because I didn't realize that the json!() macro isn't const.
One could of course adjust the build script to work without lazy_static, but that would probably involve writing a custom serializer that serializes the json code inside the build script into executable Rust code.
EDIT: After further research, I came to the conclusion that it's impossible to create serde_json::Values in a const fashion. So I don't think there is a way around lazy_static.
And if you are using lazy_static, you might as well skip the entire build.rs step and use include_str!() instead:
use lazy_static::lazy_static;
lazy_static! {
    static ref SHARED_DATA: serde_json::Value =
        serde_json::from_str(include_str!("../shared_data.json")).unwrap();
}
fn main() {
    let example_data = SHARED_DATA
        .as_object()
        .unwrap()
        .get("example_data")
        .unwrap()
        .as_u64()
        .unwrap();
    println!("{}", example_data);
}
However, this will result in a runtime error if the JSON is malformed, whereas with the build.rs and the json!() macro it becomes a compile-time error.
The general way of solving this problem in Rust is to read the assets in a single place (main(), for example), and then pass a reference to the assets as needed. This pattern plays nicely with Rust's borrow checker and initialization rules.
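A minimal sketch of that pattern, assuming the same shared_data.json and the serde_json crate from above (the run() function is just an illustrative consumer):

use serde_json::Value;

fn main() {
    // Parse the asset once at startup; a malformed file fails here, in one place.
    let shared_data: Value =
        serde_json::from_str(include_str!("../shared_data.json")).unwrap();
    run(&shared_data);
}

// Everything downstream borrows the parsed data instead of re-reading the file.
fn run(shared_data: &Value) {
    let example_data = shared_data["example_data"].as_u64().unwrap();
    println!("{example_data}");
}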
However, if you insist on using global variables:
When we apply Rust's initialization and borrow checker rules to global variables, one can see how the compiler might have a hard time proving (in general) that all accesses to a global variable are safe. Accessing such a variable may therefore require the unsafe keyword, in which case the programmer is simply asserting responsibility for manually verifying that all accesses to the global variable happen in a safe way. Idiomatic Rust tries to build safe abstractions to minimize how often programmers need to do this. I wouldn't consider the lazy_static! macro a hack; it is an abstraction (and a very commonly used one) that transfers the responsibility of proving that global access is safe from the programmer to the language.
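For completeness, the standard library nowadays ships such an abstraction itself: std::sync::OnceLock (stable since Rust 1.70). A sketch of the same lazy global, assuming the shared_data.json from the answer above:

use serde_json::Value;
use std::sync::OnceLock;

// OnceLock::new() is const, so this can be an ordinary `static`.
static SHARED_DATA: OnceLock<Value> = OnceLock::new();

fn shared_data() -> &'static Value {
    // Parsed on first access, reused afterwards; panics here if the JSON is malformed.
    SHARED_DATA.get_or_init(|| {
        serde_json::from_str(include_str!("../shared_data.json")).unwrap()
    })
}

fn main() {
    println!("{}", shared_data()["example_data"]);
}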
I am using Rust with the Rodio crate and I wanted to make a Vec of loaded sources to use whenever they are needed, so that the program doesn't need to load them every time. I've made a SoundHandler struct that contains an OutputStream, an OutputStreamHandle and a Vec<Decoder<BufReader<File>>>. The first two are instanced by using OutputStream::try_default(), which returns both in a tuple. The Vec is a vector of loaded sources. Its contents are, as I understand it, loaded and then pushed in the following SoundHandler method:
pub fn load_source(&mut self, file_path: PathBuf) -> SoundId {
    let id = self.sources.len();
    self.sources.push(
        Decoder::new(
            BufReader::new(
                File::open(file_path).expect("file not found")
            )
        ).expect("could not load source")
    );
    SoundId(id)
}
Next, the source should be played by calling the play_sound() method.
Previously, it directly loaded and played the sound:
pub fn play_sound(&self, file_path: PathBuf) {
    self.stream_handle.play_raw(
        Decoder::new(
            BufReader::new(
                File::open(file_path).expect("file not found")
            )
        ).expect("could not load source")
    );
}
But now it must handle the sources that come from the Vec, and that is where I ran into trouble and need help. I am pretty new to the language, so knowing how to solve this problem in more than one way would be even better, but my priority is really the approach I've chosen, because understanding it would undoubtedly improve my Rust problem-solving skills. Keep in mind that my main goal here is to learn the language; these snippets are not an urgent problem to be solved for a company.
Ok, so I tried the most straightforward thing possible. I directly changed the old method to do the same thing but with the Vec:
pub fn play_sound(&self, sound_id: SoundId) {
    self.stream_handle
        .play_raw(self.sources.get(sound_id.0).unwrap().convert_samples())
        .unwrap();
}
But the compiler won't compile that method, because, according to it,
error[E0507]: cannot move out of a shared reference
  |
  | self.stream_handle.play_raw(self.sources.get(sound_id.0).unwrap().convert_samples()).unwrap();
  |                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^------------------
  |                             |                                    |
  |                             |                                    value moved due to this method call
  |                             move occurs because value has type `rodio::Decoder<std::io::BufReader<std::fs::File>>`, which does not implement the `Copy` trait
  |
note: this function takes ownership of the receiver `self`, which moves value
I understand what the compiler is saying. Of course that can't be possible: convert_samples() takes ownership of the value, which would leave an invalid value inside the Vec and thus make the Vec unsafe. So I naively tried to clone the Decoder, but the struct does not implement the Clone trait and apparently has no method to do so. Finally, I found a way to get and own the value through the Vec::remove() method, but I do not want to remove the source from the Vec, and I would prefer a fallible lookup anyway.
So my question is: in this situation, how can I make that Vec of loaded sources work? Maybe I could even make a Vec of SamplesConverter instead, but then the type name gets huge, so that is probably not the intended way to do it: Vec<SamplesConverter<Decoder<BufReader<File>>, f32>>. And it has the same problems as the other implementation.
Decoders can't be cloned because the underlying data source may not be clonable (and in fact BufReader isn't). So in order to clone the audio sources you will need to ensure that the data is stored in memory. This can be accomplished by the buffered method, which returns a Buffered source, which can be cloned. So something like this should work:
pub fn load_source(&mut self, file_path: PathBuf) -> SoundId {
    let id = self.sources.len();
    self.sources.push(
        Decoder::new(
            BufReader::new(
                File::open(file_path).expect("file not found")
            )
        )
        .expect("could not load source")
        .buffered()
    );
    SoundId(id)
}

pub fn play_sound(&self, sound_id: SoundId) {
    self.stream_handle
        .play_raw(self.sources.get(sound_id.0).unwrap().clone().convert_samples())
        .unwrap();
}
Recently, I started to learn Rust. I'm currently at section 7.4 (bringing paths into scope). I tried hard, but I can't understand the purpose of self::some_sort_of_identifier in Rust. Would you please explain what the difference is between use self::module_name::function_name and use module_name::function_name? I tried both, and they both worked as expected in the example below:
mod my_mod {
    pub fn func() {
        print!("I'm here!");
    }
}

use my_mod::func;

fn main() {
    func();
}
Running this program, as expected, I see this printed to the terminal:
I'm here!
And this program gives me exactly the same result, and the Rust compiler doesn't complain about anything:
mod my_mod {
    pub fn func() {
        print!("I'm here!");
    }
}

use self::my_mod::func;

fn main() {
    func();
}
So, is self:: useless in Rust? Why should I even use self::my_mod::my_function(); when I can directly call it like so: my_mod::my_function()?
Are there any cases in which they might differ?
For your use case, it's mainly a relic from the 2015 Rust edition.
In that edition, the following code would not compile:
use my_mod::func;

mod my_mod {
    use my_mod2::func2;

    pub fn func() {
        func2();
    }

    mod my_mod2 {
        pub fn func2() {
            print!("I'm here!");
        }
    }
}

fn main() {
    func();
}
The compiler complains:
error[E0432]: unresolved import `my_mod2`
 --> src\main.rs:4:9
  |
4 |     use my_mod2::func2;
  |         ^^^^^^^ help: a similar path exists: `self::my_mod2`
Why did it change? You can see the note about the path and module system changes here.
Rust 2018 simplifies and unifies path handling compared to Rust 2015. In Rust 2015, paths work differently in use declarations than they do elsewhere. In particular, paths in use declarations would always start from the crate root, while paths in other code implicitly started from the current scope. Those differences didn't have any effect in the top-level module, which meant that everything would seem straightforward until working on a project large enough to have submodules.
In Rust 2018, paths in use declarations and in other code work the same way, both in the top-level module and in any submodule. You can use a relative path from the current scope, a path starting from an external crate name, or a path starting with crate, super, or self.
The blog post Anchored and Uniform Paths from the language team also underlines this:
The uniformity is a really big advantage, and the specific feature we're changing - no longer having to use self:: - is something I know is a big stumbling block for new users and a big annoyance for advanced users (I'm always having to edit and recompile because I tried to import from a submodule without using self).
The keyword itself, however, is still useful in use statements to refer to the current module within the path, like
use std::io::{self, Read};
being the same as
use std::io;
use std::io::Read;
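To illustrate the relative forms mentioned in the quote above (the module names are made up for this example):

mod outer {
    pub fn from_outer() {}

    pub mod inner {
        pub fn run() {
            super::from_outer();        // path relative to the parent module
            self::helper();             // path relative to the current module (self:: is optional here)
            crate::outer::from_outer(); // absolute path from the crate root
        }

        fn helper() {}
    }
}

fn main() {
    outer::inner::run();
}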
I have a small Rust project, and would like the simplicity of a flat module system, combined with the modularity of separate files. In essence, I would like this program:
src/main.rs
fn print_hello_world() {
    println!("Hello, world!");
}

fn main() {
    print_hello_world();
}
to be structured akin to this:
src/print_hello_world.rs
pub fn print_hello_world() {
    println!("Hello, world!");
}
src/main.rs
use crate::print_hello_world; // Error: no `print_hello_world` in the root

fn main() {
    print_hello_world();
}
Note that there is only a single module in this pseudo-project: the crate root module. No child modules exist. Is it possible to structure my project like this? If yes, how?
Solution attempts
It is easy to get this file structure by having this:
src/main.rs
mod print_hello_world;

use print_hello_world::print_hello_world;

fn main() {
    print_hello_world();
}
but this does not give me the desired module structure since it introduces a child module.
I have tried finding the answer to this question myself by studying various authoritative texts on this matter, most notably the relevant parts of
The Rust Book
The Rust Reference
The rustc book
My conclusion so far is that what I am asking is not possible; that there is no way to use code from other files without introducing child modules. But there is always the hope that I am missing something.
This is possible, but unidiomatic and not recommended, since it results in bad code isolation. Making changes in one file is likely to introduce compilation errors in other files. If you still want to do it, you can use the include!() macro, like this:
src/main.rs
include!("print_hello_world.rs");
fn main() {
print_hello_world();
}
I'm working on a Rust executable project (audio codec + driver) that needs to compile to ARM Cortex-M, as well as ARM64 and X86 Linux platforms. We would like to have the project structure such that target specific code can be put in separate target specific files and directories, while target independent code kept in only one place. For example, we would like the top level main.rs to conditionally include arm64/main.rs, thumb/main.rs, and x86/main.rs files, where each target specific main.rs file is likely to do vastly different things.
My questions are:
What's the best way to do such a thing? Usually, in C/C++ this is trivial, but I don't know how to do this using cfg! macros.
Is there a way to specify these things directly in Cargo.toml instead of having them in Rust source files?
Any help, or pointers to projects that do this sort of thing, would be highly appreciated.
You can use the #[cfg(...)] attribute on anything, including functions, modules, and individual blocks of code.
The simplest way would be to create several main functions in one file, where each has a #[cfg(...)] attribute on it. The reference has a big list of configuration options you can use.
#[cfg(target_arch = "x86_64")]
fn main() {
println!("Hello from x86_64");
}
#[cfg(target_arch = "arm")]
fn main() {
println!("Hello from ARM");
}
But you asked about multiple files. In Rust, each file is a module, and you can attach the same cfg attribute to the module declaration. For example, you might have two arch-specific files:
// main_x86_64.rs
pub fn main() {
    println!("Hello from x86_64");
}

// main_arm.rs
pub fn main() {
    println!("Hello from ARM");
}
Then, in the real main function, conditionally compile one of them in and conditionally call one main function or the other.
// main.rs
#[cfg(target_arch = "x86_64")]
mod main_x86_64;
#[cfg(target_arch = "arm")]
mod main_arm;

#[cfg(target_arch = "x86_64")]
fn main() {
    main_x86_64::main();
}

#[cfg(target_arch = "arm")]
fn main() {
    main_arm::main();
}
If you're looking for a broader pattern, I'd point you to the way core organizes platform-specific things. There's an arch module that conditionally compiles in each platform's submodule on just that platform. You can see in the source code that there's a cfg attribute for each submodule so that it only gets compiled in on that platform.
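As a rough sketch of that pattern (module and function names here are illustrative, not core's actual ones), a facade module can select one platform submodule and re-export its items, so the rest of the crate never needs its own cfg attributes:

// arch/mod.rs
#[cfg(target_arch = "x86_64")]
mod x86_64;
#[cfg(target_arch = "x86_64")]
pub use self::x86_64::*;

#[cfg(target_arch = "arm")]
mod arm;
#[cfg(target_arch = "arm")]
pub use self::arm::*;

// arch/x86_64.rs and arch/arm.rs each define the same set of items
// (for example `pub fn init_platform()`), so callers simply use `arch::init_platform()`.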
I am trying to use the flate2 and tar crates to iterate over the entries of a .tar.gz file, but am getting type errors, and I'm not sure why.
Here is my code (and yes, I know I shouldn't use .unwrap() everywhere, this is just POC code):
extern crate flate2; // version 0.2.11
extern crate tar; // version 0.3

use std::io::Read;
use std::fs::File;
use flate2::read::GzDecoder;
use tar::Archive;

fn main() {
    let file = File::open("/path/to/tarball.tar.gz").unwrap();
    let mut decompressed = GzDecoder::new(file).unwrap();
    let unarchived = Archive::new(decompressed);
    let entries_iter = unarchived.entries_mut();
}
This gives me the error error: no method named 'entries_mut' found for type 'tar::Archive<flate2::gz::DecoderReader<std::fs::File>>' in the current scope.
GzDecoder::new is returning a DecoderReader<R>, which implements Read as long as R implements Read, which File does, so that should be fine. Archive<O> has different methods depending on what kind of traits O implements, but in this case I am trying to use .entries_mut(), which only requires O to implement Read.
Obviously I am missing something here, could someone help shed some light on this?
Oh man, this is tricky. The published documentation and the code do not match. In tar version 0.3.2, the method is called files_mut:
extern crate flate2; // version 0.2.11
extern crate tar; // version 0.3

use std::fs::File;
use flate2::read::GzDecoder;
use tar::Archive;

fn main() {
    let file = File::open("/path/to/tarball.tar.gz").unwrap();
    let decompressed = GzDecoder::new(file).unwrap();
    let mut unarchived = Archive::new(decompressed);
    let _files_iter = unarchived.files_mut();
}
This commit changed the API.
This is a subtle but prevalent problem with self-hosted Rust documentation at the moment (my own crates have the same issue). We build the documentation on every push to the master branch, but people use stable releases. Sometimes these go out of sync.
The best thing you can do is to run cargo doc or cargo doc --open on your local project. This will build a set of documentation for the crates and versions you are using.
Turns out that the published documentation of tar-rs was for a different version than what was on crates.io, so I had to change .entries_mut to .files_mut, and let files = to let mut files =.