I am trying to use the flate2 and tar crates to iterate over the entries of a .tar.gz file, but am getting type errors, and I'm not sure why.
Here is my code (and yes, I know I shouldn't use .unwrap() everywhere, this is just POC code):
extern crate flate2; // version 0.2.11
extern crate tar; // version 0.3
use std::io::Read;
use std::fs::File;
use flate2::read::GzDecoder;
use tar::Archive;
fn main() {
let file = File::open("/path/to/tarball.tar.gz").unwrap();
let mut decompressed = GzDecoder::new(file).unwrap();
let unarchived = Archive::new(decompressed);
let entries_iter = unarchived.entries_mut();
}
This gives me the error error: no method named 'entries_mut' found for type 'tar::Archive<flate2::gz::DecoderReader<std::fs::File>>' in the current scope.
GzDecoder::new is returning a DecoderReader<R>, which implements Read as long as R implements Read, which File does, so that should be fine. Archive<O> has different methods depending on what kind of traits O implements, but in this case I am trying to use .entries_mut(), which only requires O to implement Read.
Obviously I am missing something here, could someone help shed some light on this?
Oh man, this is tricky. The published documentation and the code do not match. In tar version 0.3.2, the method is called files_mut:
extern crate flate2; // version 0.2.11
extern crate tar; // version 0.3
use std::fs::File;
use flate2::read::GzDecoder;
use tar::Archive;
fn main() {
let file = File::open("/path/to/tarball.tar.gz").unwrap();
let decompressed = GzDecoder::new(file).unwrap();
let mut unarchived = Archive::new(decompressed);
let _files_iter = unarchived.files_mut();
}
This commit changed the API.
This is a subtle but prevalent problem with self-hosted Rust documentation at the moment (my own crates have the same issue). We build the documentation on every push to the master branch, but people use stable releases. Sometimes these go out of sync.
The best thing you can do is to run cargo doc or cargo doc --open on your local project. This will build a set of documentation for the crates and versions you are using.
Turns out that the published documentation of tar-rs was for a different version than what was on crates.io, so I had to change .entries_mut to .files_mut, and let files = to let mut files =.
I have several JSON files which contain objects that need to be exported from a module and used (read-only) in various places in the code base. Exporting a function that reads and parses the files, and invoking it every time the objects are needed, seems very wasteful. In Go I'd export a global variable and initialize it in an init function. So how do I go about doing this in Rust?
I guess you are using this for interface definitions between different system parts. This is a known and well-understood problem, and it is usually solved with a build script, as in the case of protobuf.
There is a very good tutorial about how to use a build script to generate files.
This is how it could look in code.
(All files are relative to the crate root directory)
shared_data.json:
{
"example_data": 42
}
build.rs:
use std::{
env,
fs::File,
io::{Read, Write},
path::PathBuf,
};
fn main() {
// OUT_DIR is automatically set by cargo and contains the build directory path
let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
// The path of the input file
let data_path_in = "shared_data.json";
// The path in the build directory that should contain the generated file
let data_path_out = out_path.join("generated_shared_data.rs");
// Tell cargo to re-run the build script whenever the input file changes
println!("cargo:rerun-if-changed={data_path_in}");
// The actual conversion
let mut data_in = String::new();
File::open(data_path_in)
.unwrap()
.read_to_string(&mut data_in)
.unwrap();
{
let mut out_file = File::create(data_path_out).unwrap();
writeln!(
out_file,
"::lazy_static::lazy_static! {{ static ref SHARED_DATA: ::serde_json::Value = ::serde_json::json!({}); }}",
data_in
)
.unwrap();
}
}
main.rs:
include!(concat!(env!("OUT_DIR"), "/generated_shared_data.rs"));
fn main() {
let example_data = SHARED_DATA
.as_object()
.unwrap()
.get("example_data")
.unwrap()
.as_u64()
.unwrap();
println!("{}", example_data);
}
Output:
42
Note that this still uses lazy_static, because I didn't realize that the json!() macro isn't const.
One could of course adjust the build script to work without lazy_static, but that would probably involve writing a custom serializer that serializes the json code inside the build script into executable Rust code.
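For example, here is a rough sketch of that idea, assuming the JSON shape is known ahead of time and serde_json is listed under [build-dependencies] (the EXAMPLE_DATA constant name is mine, not from the original code):
build.rs (sketch):
use std::{env, fs, path::PathBuf};

fn main() {
    // Re-run when the input changes, as in the version above.
    println!("cargo:rerun-if-changed=shared_data.json");
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Parse the JSON at build time and pull out the field we care about.
    let data: serde_json::Value =
        serde_json::from_str(&fs::read_to_string("shared_data.json").unwrap()).unwrap();
    let example_data = data["example_data"].as_u64().unwrap();

    // Emit plain Rust code that needs no runtime initialization at all.
    fs::write(
        out_path.join("generated_shared_data.rs"),
        format!("pub const EXAMPLE_DATA: u64 = {example_data};\n"),
    )
    .unwrap();
}
main.rs would then include! the generated file as before and read EXAMPLE_DATA directly, without lazy_static.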
EDIT: After further research, I came to the conclusion that it's impossible to create serde_json::Values in a const fashion. So I don't think there is a way around lazy_static.
And if you are using lazy_static, you might as well skip the entire build.rs step and use include_str!() instead:
use lazy_static::lazy_static;
lazy_static! {
static ref SHARED_DATA: serde_json::Value =
serde_json::from_str(include_str!("../shared_data.json")).unwrap();
}
fn main() {
let example_data = SHARED_DATA
.as_object()
.unwrap()
.get("example_data")
.unwrap()
.as_u64()
.unwrap();
println!("{}", example_data);
}
However, this results in a runtime error if the JSON is broken, whereas with the build.rs and json!() approach a broken file is caught at compile time.
The general way of solving this problem in Rust is to read the assets in a single place (main(), for example), and then pass a reference to the assets as needed. This pattern plays nicely with Rust's borrow checker and initialization rules.
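As a minimal sketch of that pattern (the file name matches the example above; the use_shared_data function is just illustrative):
use std::fs;

// Consumers take the parsed data by reference; no globals involved.
fn use_shared_data(data: &serde_json::Value) {
    println!("{}", data["example_data"]);
}

fn main() {
    // Read and parse the asset exactly once, at startup.
    let raw = fs::read_to_string("shared_data.json").unwrap();
    let shared_data: serde_json::Value = serde_json::from_str(&raw).unwrap();

    // Hand out references wherever the data is needed.
    use_shared_data(&shared_data);
}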
However, if you insist on using global variables:
When we apply Rust's initialization and borrow-checker rules to global variables, one can see how the compiler might have a hard time proving (in general) that all accesses to a global variable are safe. Using a global variable may therefore require the unsafe keyword at each access, in which case you are simply asserting that you, the programmer, have manually verified that every access to the global happens in a safe way. Idiomatic Rust tries to build safe abstractions to minimize how often programmers need to do this. I wouldn't consider the lazy_static! macro a hack; it is an abstraction (and a very commonly used one) that shifts the burden of proving that the global access is safe from the programmer onto the abstraction itself.
Recently, I started to learn Rust. I'm currently at section 7.4 (bringing paths into scope). I've tried hard, but I can't understand the purpose of self::some_sort_of_identifier in Rust. Would you please explain the difference between use self::module_name::function_name and use module_name::function_name? I tried both, and they both worked as expected in the example below:
mod my_mod {
pub fn func() {
print!("I'm here!");
}
}
use my_mod::func;
fn main() {
func();
}
Running this program, as expected, I can see this statement printed into the terminal:
I'm here!
And this program here gives me exactly the same results and the rust compiler doesn't complain about anything:
mod my_mod {
pub fn func() {
print!("I'm here!");
}
}
use self::my_mod::func;
fn main() {
func();
}
So, is self:: useless in Rust? Why should I even use self::my_mod::my_function(); when I can call it directly like so: my_mod::my_function();?
Are there any cases in which they might differ?
For your use case, it's mainly a relic from the 2015 Rust edition.
Under that edition, the following code would not compile:
use my_mod::func;
mod my_mod {
use my_mod2::func2;
pub fn func() {
func2();
}
mod my_mod2 {
pub fn func2() {
print!("I'm here!");
}
}
}
fn main() {
func();
}
The compiler complains:
error[E0432]: unresolved import `my_mod2`
 --> src\main.rs:4:9
  |
4 |     use my_mod2::func2;
  |         ^^^^^^^ help: a similar path exists: `self::my_mod2`
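Following the compiler's hint, the 2015-edition code compiles once the inner import is anchored at the current module:
use my_mod::func;

mod my_mod {
    use self::my_mod2::func2; // resolves relative to my_mod

    pub fn func() {
        func2();
    }

    mod my_mod2 {
        pub fn func2() {
            print!("I'm here!");
        }
    }
}

fn main() {
    func();
}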
Why did it change? You can see the note about the path and module system changes here.
Rust 2018 simplifies and unifies path handling compared to Rust 2015. In Rust
2015, paths work differently in use declarations than they do
elsewhere. In particular, paths in use declarations would always start
from the crate root, while paths in other code implicitly started from
the current scope. Those differences didn't have any effect in the
top-level module, which meant that everything would seem
straightforward until working on a project large enough to have
submodules.
In Rust 2018, paths in use declarations and in other code work the
same way, both in the top-level module and in any submodule. You can
use a relative path from the current scope, a path starting from an
external crate name, or a path starting with crate, super, or self.
The blog post Anchored and Uniform Paths from the language team also underlines this
The uniformity is a really big advantage, and the specific feature we’re changing -
no longer having to use self:: - is something I know is a big
stumbling block for new users and a big annoyance for advanced users
(I’m always having to edit and recompile because I tried to import
from a submodule without using self).
The keyword itself, however, is still useful in use statements to refer to the current module within the path itself, as in
use std::io::{self, Read};
being the same as
use std::io;
use std::io::Read;
I'm currently trying to learn Rust (for embedded specifically), coming from a background of C for embedded systems and Python.
So far, I've been reading Rust Programming Language and Rust for Embedded, and read a few blog posts on the web.
I want my first project to be a simple "Blinky", where an LED blinks infinitely. I've got an STM32L152CDISCOVERY board with an STM32L152 chip in it (basically same as STM32L151), which is a Cortex M3.
Instead of implementing everything from scratch, I want to leverage existing crates and HALs. I've found two that seem promising: stm32l1 and stm32l1xx-hal. I've tried to read the documentation of each of them and also part of the source code, but I still can't figure out how to use them correctly.
I've got a few questions about Rust and about the crates:
I see that stm32l1xx-hal has a dependency on stm32l1. Do I need to add both as dependencies in my Cargo.toml file? Or will that create problems related to ownership?
Is this the correct way to add them? And why is the second one added like that, with [dependencies.stm32l1]?
[dependencies]
cortex-m-rt = "0.6.10"
cortex-m-semihosting = "0.3.3"
panic-halt = "0.2.0"
stm32l1xx-hal = "0.1.0"
[dependencies.stm32l1]
version = "0.13.0"
features = ["stm32l151", "rt"]
To blink LD4 (which is connected to PB4; correction: PB6), I've got to enable GPIOB in the RCC register and then configure the pin as a push-pull output. By inspecting the documentation of stm32l1xx-hal, I see that there is an RCC struct and a PB4 struct with the method into_push_pull_output. However, I still don't understand how to use these structs: how to import them or how to get an instance of them.
I've seen code examples for stm32l1 but not for stm32l1xx-hal. I know that I can do this:
use stm32l1::{stm32l151};
...
let p = stm32l151::Peripherals::take().unwrap();
p.RCC.ahbenr.modify(|_,w| w.gpiopben().set_bit());
But in the source code of stm32l1xx-hal I see that the RCC part is already done in impl GpioExt for $GPIOX, but I don't know how to get to this "Parts" functionality.
Any help that points me in the right direction is appreciated.
I got some help from a Discord community. The answers were (modified a bit by me):
stm32l1xx-hal already depends on stm32l1, as seen here. There's no need to depend on it twice; it is enough to add this to Cargo.toml:
[dependencies.stm32l1xx-hal]
version = "0.1.0"
default-features = false
features = ["stm32l151", "rt"]
Note that default-features = false is supposedly optional, but without it the compiler was giving me an error.
The two syntaxes are equivalent, but as noted above, I only need to add the HAL crate. You can use curly braces {} with the first style to add the options, such as:
stm32l1xx-hal = { version = "0.1.0", features = ["stm32l151", "rt"]}
The right code for doing the blinky (which was on PB6, not PB4, sigh) was:
#![no_main]
#![no_std]
use panic_halt as _;
use cortex_m_rt::entry;
use stm32l1xx_hal::delay::Delay;
use stm32l1xx_hal::gpio::GpioExt;
use stm32l1xx_hal::hal::digital::v2::OutputPin;
use stm32l1xx_hal::rcc::{Config, RccExt};
use stm32l1xx_hal::stm32::Peripherals;
use stm32l1xx_hal::stm32::CorePeripherals;
use stm32l1xx_hal::time::MicroSeconds;
#[entry]
fn main() -> ! {
let p = Peripherals::take().unwrap();
let cp = CorePeripherals::take().unwrap();
// Get LED pin PB6
let gpiob = p.GPIOB.split();
let mut led = gpiob.pb6.into_push_pull_output();
// Set up a delay
let rcc = p.RCC.freeze(Config::default());
let mut delay = Delay::new(cp.SYST, rcc.clocks);
loop {
// Turn LED On
led.set_high().unwrap();
delay.delay(MicroSeconds(1_000_000_u32));
// Turn LED Off
led.set_low().unwrap();
delay.delay(MicroSeconds(1_000_000_u32));
}
}
For me, the key was to understand that the split method can be called on the GPIOB peripheral because stm32l1xx-hal implements it (via the GpioExt trait) for a struct defined in stm32l1. In other words, the HAL crate is not only defining new structs, but also extending existing structs with new functionality. I need to wrap my head around trait design patterns.
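Here is a stripped-down sketch of that extension-trait pattern with toy types (Gpiob, Parts, and Pb6 below are stand-ins, not the actual PAC or HAL definitions):
// Pretend this struct comes from the low-level PAC crate (stm32l1's GPIOB plays this role).
pub struct Gpiob;

// The HAL defines an extension trait with an associated "Parts" type...
pub trait GpioExt {
    type Parts;
    fn split(self) -> Self::Parts;
}

// ...plus the pin types those parts are made of.
pub struct Pb6;

impl Pb6 {
    pub fn into_push_pull_output(self) -> Pb6 {
        // A real HAL would configure the GPIO mode registers here.
        self
    }
}

pub struct Parts {
    pub pb6: Pb6,
}

// Implementing the HAL's trait for the PAC's register-block type is what makes
// p.GPIOB.split() available, as long as GpioExt is in scope.
impl GpioExt for Gpiob {
    type Parts = Parts;
    fn split(self) -> Parts {
        Parts { pb6: Pb6 }
    }
}

fn main() {
    let gpiob = Gpiob;
    let _led = gpiob.split().pb6.into_push_pull_output();
}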
I'm very new to Rust's syntax for using libraries. Well, mostly new to Rust overall.
I've included a library that is unfinished and seemingly doesn't work. The library is called "hours", and its lib.rs contains the following:
// #[derive(Clone, Debug, PartialEq, Eq)]
pub struct Hours {
pub rules: Vec<types::RuleSequence>,
pub tz: Tz,
}
impl Hours {
pub fn from(s: &str, tz: Tz) -> Result<Self, String> {
//... abbreviated
}
pub fn at(self: &Self, dt: &DateTime<Tz>) -> types::Modifier {
//... abbreviated
}
}
It is included in Cargo.toml; the relevant lines are:
edition = "2018"
[dependencies]
hours = "0.0.1"
I want to know if it is possible to include and use the from() function; so far, I'm out of luck. Here's what I tried:
use hours;
fn main() {
//... abbreviated
let hours = Hours::from(example_hrs, Amsterdam).unwrap();
}
Gives compile error: Hours::from(example_hrs, Amsterdam).unwrap(); ^^^^^ use of undeclared type or module `Hours`
use hours::Hours;
fn main() {
//... abbreviated
let hours = Hours::from(example_hrs, Amsterdam).unwrap();
}
Gives compile error: use hours::Hours; ^^^^^^^^^^^^ no `Hours` in the root
use hours;
fn main() {
//... abbreviated
let hours = hours::Hours::from(example_hrs, Amsterdam).unwrap();
}
Gives compile error: hours::Hours::from(example_hrs, Amsterdam).unwrap(); ^^^^^ could not find `Hours` in `hours`
Is there any way to include and use this library? Do I need to change the library, or am I simply using it wrong?
The problem here is that the code in the repository link you've shared doesn't match the dependency published on crates.io, so naturally Rust can't find the required API components. In this case, the crate's owner hasn't published the current GitLab code to crates.io yet.
To see that, you can quickly check the source on docs.rs. Here is the link for the dependency in question: docs.rs/crate/hours/0.0.1/source/.
If you want to use the current code in the repository, there are two options.
You can get it locally by downloading it (or using git clone) and then use it by specifying the path in Cargo.toml; a sketch of that entry follows below.
Or you can point Cargo at the git repository directly in Cargo.toml:
hours = { git = "https://gitlab.com/alantrick/hours.git", rev="7b7d369796c209db7b61db71aa7396f2ec59f942"}
Pinning a revision number or tag is advisable, since updates on the master branch may break compatibility.
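For the local-checkout option mentioned above, the Cargo.toml entry could look like this (../hours is just an example of wherever you cloned the repository):
hours = { path = "../hours" }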
Why does the source on docs.rs match crates.io?
Please check the About section on docs.rs:
Docs.rs automatically builds crates' documentation released on
crates.io using the nightly release of the Rust compiler
This means it is synchronized with crates.io.
To be sure, you can also check the crate's source in your local registry cache.
## Note that this path is built with default cargo settings
$HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/hours-0.0.1
^^^^^^^^^^^^^^^^^^^^^^^^^^^ GitHub registry for crates.io
Why the crate's repository link on crates.io doesn't match the crate's source
Please check the reference on publishing a crate; the repository information is specified in the package's metadata (in Cargo.toml).
According to the package metadata reference, this information is used as follows:
These URLs point to more information about the package. These are
intended to be webviews of the relevant data, not necessarily compatible
with VCS tools and the like.
documentation = "..."
homepage = "..."
repository = "..."
You may also check popular crates: their repository links usually point to the project's main GitHub page, which reflects the master branch, not a tag of the currently published version.
I would like to use the nix crate in a project.
However, this project also has an acceptable alternative implementation for OSX and Windows, where I would like to use a different crate.
What is the current way of expressing that I only want nix on Linux platforms?
There are two steps needed to make a dependency completely target-specific.
First, you need to specify this in your Cargo.toml, like so:
[target.'cfg(target_os = "linux")'.dependencies]
nix = "0.5"
This makes Cargo include the dependency only when that configuration is active. However, it means that on non-Linux OSes you'll get a compile error at every spot where you use nix in your code. To remedy this, annotate those usages with a cfg attribute, like so:
#[cfg(target_os = "linux")]
use nix::foo;
Of course that has ripple effects: other code using those items now fails to compile, because the import, function, module, or whatever doesn't exist on non-Linux targets. One common way to deal with that is to put all usages of nix into one function and provide a no-op version of it on all other OSes. For example:
#[cfg(target_os = "linux")]
fn do_stuff() {
nix::do_something();
}
#[cfg(not(target_os = "linux"))]
fn do_stuff() {}
fn main() {
do_stuff();
}
With this, the function do_stuff exists and can be called on all platforms. Of course, you have to decide for yourself what the function should do on non-Linux platforms.
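Since the question mentions an alternative crate for OSX and Windows rather than a no-op, the same pattern extends to dispatching to a different backend per target. In this sketch, other_crate and both do_something calls are placeholders (mirroring the placeholder call above), and the alternative crate would get its own target-specific section in Cargo.toml, e.g. [target.'cfg(not(target_os = "linux"))'.dependencies]:
#[cfg(target_os = "linux")]
fn do_stuff() {
    // Linux implementation backed by nix.
    nix::do_something();
}

#[cfg(not(target_os = "linux"))]
fn do_stuff() {
    // macOS/Windows implementation backed by the alternative crate.
    other_crate::do_something();
}

fn main() {
    do_stuff();
}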