After reading the Rustonomicon page on subtyping and variance (https://doc.rust-lang.org/nomicon/subtyping.html), I'm curious how to write an example where struct variance actually takes effect.
use std::cell::UnsafeCell;

struct MyType<'b> {
z: UnsafeCell<&'b f64>,
}
fn check(my_type: MyType<'static>) {}
#[test]
fn it_works() {
{
let f = &1f64;
let unsafe_cell = UnsafeCell::new(f);
// the reference stored in this my_type is created in this inner scope, so its lifetime does not look like 'static
let my_type = MyType { z: unsafe_cell };
// why does this call to check compile?
check(my_type);
}
}
The problem is that the lifetime of &1f64 is 'static. This is because 1f64 is just a constant: it gets promoted to static memory in the executable, so you can safely take a shared reference to it and expect it to last until the end of the program.
If you change the declaration of f to something like:
let v = 1f64;
let f = &v;
...the build will fail with the expected "borrowed value does not live long enough" error.
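To see the promotion in isolation, here is a minimal sketch (the variable names are mine, not from your code); the erroring line is left commented out:
fn main() {
    // The literal 1f64 is promoted to a constant in static memory,
    // so a reference to it can be given an explicit 'static type:
    let _f: &'static f64 = &1f64;

    let v = 1f64;
    let _g = &v;
    // ERROR if uncommented: `v` does not live long enough, because `v` is an
    // ordinary stack local and a reference to it can never be 'static.
    // let _h: &'static f64 = &v;
}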
Related
Rust noob here, reading about explicit lifetime annotations. I see there are lots of questions about this already, so I hope this post isn't a duplicate.
For me, this answers the question of why the compiler needs such annotations, but it does not answer why the user needs them. The example in the answer is:
struct Foo<'a> {
x: &'a i32,
}
fn main() {
let f : Foo;
{
let n = 5; // variable that is invalid outside this block
let y = &n;
f = Foo { x: y };
};
println!("{}", f.x);
}
And I can see y goes out of scope before f, but I can't think of a case where I'd want a struct with some field that could go out of scope before the parent struct. Why would anyone need that? And, by extension, why not just always enforce that the fields live as long as the parent struct?
The same question applies to functions: I can't see why I would want a function that takes a borrowed variable as an argument when the variable doesn't live through the entire function. The only scenario I could think of is when there's some concurrency involved.
The lifetime annotation is not for Foo itself but for the things that use the reference outside of, and after, the struct.
Take this example:
struct Foo<'a> {
x: &'a i32,
}
fn main() {
let n = 5;
let z;
{
let f = Foo { x: &n };
z = f.x;
};
println!("{}", z);
}
If Foo did not keep track of the lifetime, there would be no way for the compiler to know that z is still valid after f went out of scope.
In other words the annotation is there to 'extend' the lifetime of references beyond the struct they're contained in.
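Conversely, the same annotation is what lets the compiler reject the case where the referent is too short-lived for those later uses. A minimal sketch, my own variation on the example above (the erroring use is commented out):
struct Foo<'a> {
    x: &'a i32,
}

fn main() {
    let z;
    {
        let n = 5; // dropped at the end of this inner block
        let f = Foo { x: &n };
        z = f.x; // `z` now borrows `n` through `Foo`'s lifetime parameter
        let _ = z;
    }
    // ERROR if uncommented: "`n` does not live long enough",
    // because `z` would escape the block that owns `n`.
    // println!("{}", z);
}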
I was reading the code of the anyhow crate in Rust. There is a particular pattern I don't fully grasp:
{
let vtable = &ErrorVTable { ... };
construct(vtable, ...);
}
fn construct(vtable: &'static ErrorVTable, ...);
We seem to create an ErrorVTable struct and pass around a reference to it that has the 'static lifetime. I'd expect the compiler to create the struct on the function's stack and hand out a reference to it, causing weird memory issues.
But it appears that the compiler detects this for every possible E inferred at compile time and somehow creates static variables for them. How does this actually work?
Consider this simplified code:
struct Foo {
x: i32
}
fn test(_: &'static Foo) {}
fn main() {
let f = Foo{ x: 42 };
test(&f);
}
As expected, it does not compile, with the message:
f does not live long enough
However, this slight variation does compile:
fn main() {
let f = &Foo{ x: 42 };
test(f);
}
The difference is that in the former, the Foo object is a local value with a local lifetime, so no 'static reference to it can be built. But in the latter, the actual object is a static constant, so it has the 'static lifetime, and f is just a reference to it.
To help see the difference, consider this other equivalent code:
const F: Foo = Foo{ x: 42 };
fn main() {
test(&F);
}
Or if you use an actual constant literal:
fn test_2(_: &'static i32) {}
fn main() {
let i = &42;
test_2(&i);
}
Naturally, this only works if all the values used in the Foo construction are constants. If any value is not constant, the compiler will silently switch to a local temporary instead of a static constant and you will lose the 'static lifetime.
The precise rules for this constant promotion, as it is sometimes called, are a bit complicated and may be extended in newer compiler versions.
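For instance, here is a sketch that mirrors the example above; the runtime value is hypothetical and the failing call is commented out:
struct Foo { x: i32 }

fn test(_: &'static Foo) {}

fn main() {
    // All fields are constants, so the temporary is promoted and `ok` is 'static:
    let ok = &Foo { x: 42 };
    test(ok);

    // A value only known at run time (hypothetical source): no promotion,
    // so `not_ok` borrows a short-lived local temporary instead.
    let n = std::env::args().count() as i32;
    let not_ok = &Foo { x: n };
    let _ = not_ok;
    // ERROR if uncommented: the temporary is dropped while borrowed,
    // and the borrow is not 'static.
    // test(not_ok);
}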
Here is a simplified version of what I want to achieve:
struct Foo<'a> {
boo: Option<&'a mut String>,
}
fn main() {
let mut foo = Foo { boo: None };
{
let mut string = "Hello".to_string();
foo.boo = Some(&mut string);
foo.boo.unwrap().push_str(", I am foo!");
foo.boo = None;
} // string goes out of scope. foo does not reference string anymore
} // foo goes out of scope
This is obviously completely safe as foo.boo is None once string goes out of scope.
Is there a way to tell this to the compiler?
This is obviously completely safe
What is obvious to humans isn't always obvious to the compiler; sometimes the compiler isn't as smart as humans (but it's way more vigilant!).
In this case, your original code compiles when non-lexical lifetimes are enabled:
#![feature(nll)]
struct Foo<'a> {
boo: Option<&'a mut String>,
}
fn main() {
let mut foo = Foo { boo: None };
{
let mut string = "Hello".to_string();
foo.boo = Some(&mut string);
foo.boo.unwrap().push_str(", I am foo!");
foo.boo = None;
} // string goes out of scope. foo does not reference string anymore
} // foo goes out of scope
This is only because foo is never used once it would be invalid (after string goes out of scope), not because you set the value to None. Trying to print out the value after the innermost scope would still result in an error.
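For instance, a minimal sketch of that claim (the offending use is commented out):
struct Foo<'a> {
    boo: Option<&'a mut String>,
}

fn main() {
    let mut foo = Foo { boo: None };
    {
        let mut string = "Hello".to_string();
        foo.boo = Some(&mut string);
        foo.boo = None;
    } // `string` is dropped here
    // ERROR if uncommented: "`string` does not live long enough", because this
    // later use of `foo` forces the borrow of `string` to outlive the inner block.
    // println!("{}", foo.boo.is_none());
}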
Is it possible to have a struct which contains a reference to a value which has a shorter lifetime than the struct?
The purpose of Rust's borrowing system is to ensure that things holding references do not live longer than the referred-to item.
After non-lexical lifetimes
Maybe, so long as you don't make use of the reference after it is no longer valid. This works, for example:
#![feature(nll)]
struct Foo<'a> {
boo: Option<&'a mut String>,
}
fn main() {
let mut foo = Foo { boo: None };
// This lives less than `foo`
let mut string1 = "Hello".to_string();
foo.boo = Some(&mut string1);
// This lives less than both `foo` and `string1`!
let mut string2 = "Goodbye".to_string();
foo.boo = Some(&mut string2);
}
Before non-lexical lifetimes
No. The borrow checker is not smart enough to tell that you cannot / don't use the reference after it would be invalid. It's overly conservative.
In this case, you are running into the fact that lifetimes are represented as part of the type. Said another way, the generic lifetime parameter 'a has been "filled in" with a concrete lifetime value covering the lines where string is alive. However, the lifetime of foo is longer than those lines, thus you get an error.
The compiler does not look at what actions your code takes; once it has seen that you parameterize it with that specific lifetime, that's what it is.
The usual fix I would reach for is to split the type into two parts, those that need the reference and those that don't:
struct FooCore {
size: i32,
}
struct Foo<'a> {
core: FooCore,
boo: &'a mut String,
}
fn main() {
let core = FooCore { size: 42 };
let core = {
let mut string = "Hello".to_string();
let foo = Foo { core, boo: &mut string };
foo.boo.push_str(", I am foo!");
foo.core
}; // string goes out of scope. foo does not reference string anymore
} // foo goes out of scope
Note how this removes the need for the Option — your types now tell you if the string is present or not.
An alternate solution would be to map the whole type when setting the string. In this case, we consume the whole variable and change the type by changing the lifetime:
struct Foo<'a> {
boo: Option<&'a mut String>,
}
impl<'a> Foo<'a> {
fn set<'b>(self, boo: &'b mut String) -> Foo<'b> {
Foo { boo: Some(boo) }
}
fn unset(self) -> Foo<'static> {
Foo { boo: None }
}
}
fn main() {
let foo = Foo { boo: None };
let foo = {
let mut string = "Hello".to_string();
let mut foo = foo.set(&mut string);
foo.boo.as_mut().unwrap().push_str(", I am foo!");
foo.unset()
}; // string goes out of scope. foo does not reference string anymore
} // foo goes out of scope
Shepmaster's answer is completely correct: you can't express this with lifetimes, which are a compile-time feature. But if you're trying to replicate something that would work in a managed language, you can use reference counting to enforce safety at run time.
(Safety in the usual Rust sense of memory safety. Panics and leaks are still possible in safe Rust; there are good reasons for this, but that's a topic for another question.)
Here's an example (playground). Rc pointers disallow mutation, so I had to add a layer of RefCell to imitate the code in the question.
use std::rc::{Rc,Weak};
use std::cell::RefCell;
struct Foo {
boo: Weak<RefCell<String>>,
}
fn main() {
let mut foo = Foo { boo: Weak::new() };
{
// create a string with a shorter lifetime than foo
let string = "Hello".to_string();
// move the string behind an Rc pointer
let rc1 = Rc::new(RefCell::new(string));
// weaken the pointer to store it in foo
foo.boo = Rc::downgrade(&rc1);
// accessing the string
let rc2 = foo.boo.upgrade().unwrap();
assert_eq!("Hello", *rc2.borrow());
// mutating the string
let rc3 = foo.boo.upgrade().unwrap();
rc3.borrow_mut().push_str(", I am foo!");
assert_eq!("Hello, I am foo!", *rc3.borrow());
} // rc1, rc2 and rc3 go out of scope and string is automatically dropped.
// foo.boo now refers to a dropped value and cannot be upgraded anymore.
assert!(foo.boo.upgrade().is_none());
}
Notice that I didn't have to reassign foo.boo before string went out of scope, as in your example -- the Weak pointer automatically becomes invalid when the last extant Rc pointer is dropped. This is one way in which Rust's type system still helps you enforce memory safety even after giving up the strong compile-time guarantees of plain & references.
I've been playing around with AudioUnit via Rust and the Rust library coreaudio-rs. Their example seems to work well:
extern crate coreaudio;
use coreaudio::audio_unit::{AudioUnit, IOType};
use coreaudio::audio_unit::render_callback::{self, data};
use std::f32::consts::PI;
struct Iter {
value: f32,
}
impl Iterator for Iter {
type Item = [f32; 2];
fn next(&mut self) -> Option<[f32; 2]> {
self.value += 440.0 / 44_100.0;
let amp = (self.value * PI * 2.0).sin() as f32 * 0.15;
Some([amp, amp])
}
}
fn main() {
run().unwrap()
}
fn run() -> Result<(), coreaudio::Error> {
// 440hz sine wave generator.
let mut samples = Iter { value: 0.0 };
//let buf: Vec<[f32; 2]> = vec![[0.0, 0.0]];
//let mut samples = buf.iter();
// Construct an Output audio unit that delivers audio to the default output device.
let mut audio_unit = try!(AudioUnit::new(IOType::DefaultOutput));
// Q: What is this type?
let callback = move |args| {
let Args { num_frames, mut data, .. } = args;
for i in 0..num_frames {
let sample = samples.next().unwrap();
for (channel_idx, channel) in data.channels_mut().enumerate() {
channel[i] = sample[channel_idx];
}
}
Ok(())
};
type Args = render_callback::Args<data::NonInterleaved<f32>>;
try!(audio_unit.set_render_callback(callback));
try!(audio_unit.start());
std::thread::sleep(std::time::Duration::from_millis(30000));
Ok(())
}
However, changing it up a little bit to load via a buffer doesn't work as well:
extern crate coreaudio;
use coreaudio::audio_unit::{AudioUnit, IOType};
use coreaudio::audio_unit::render_callback::{self, data};
fn main() {
run().unwrap()
}
fn run() -> Result<(), coreaudio::Error> {
let buf: Vec<[f32; 2]> = vec![[0.0, 0.0]];
let mut samples = buf.iter();
// Construct an Output audio unit that delivers audio to the default output device.
let mut audio_unit = try!(AudioUnit::new(IOType::DefaultOutput));
// Q: What is this type?
let callback = move |args| {
let Args { num_frames, mut data, .. } = args;
for i in 0..num_frames {
let sample = samples.next().unwrap();
for (channel_idx, channel) in data.channels_mut().enumerate() {
channel[i] = sample[channel_idx];
}
}
Ok(())
};
type Args = render_callback::Args<data::NonInterleaved<f32>>;
try!(audio_unit.set_render_callback(callback));
try!(audio_unit.start());
std::thread::sleep(std::time::Duration::from_millis(30000));
Ok(())
}
It says, correctly so, that buf only lives until the end of run and does not live long enough for the audio unit—which makes sense, because "borrowed value must be valid for the static lifetime...".
In any case, that doesn't bother me; I can modify the iterator to load and read from the buffer just fine. However, it does raise some questions:
Why does the Iter { value: 0.0 } have the 'static lifetime?
If it doesn't have the 'static lifetime, why does it say the borrowed value must be valid for the 'static lifetime?
If it does have the 'static lifetime, why? It seems like it would be on the heap and closed over by the callback.
I understand that the move keyword allows moving inside the closure, which doesn't help me understand why it interacts with lifetimes. Why can't it move the buffer? Do I have to move both the buffer and the iterator into the closure? How would I do that?
On top of all this, how do I figure out the expected lifetime without trying to be a compiler myself? It doesn't seem like guessing and compiling is always a straightforward method to resolving these issues.
Why does the Iter { value: 0.0 } have the 'static lifetime?
It doesn't; only references have lifetimes.
why does it say the borrowed value must be valid for the 'static lifetime
how do I figure out the expected lifetime without trying to be a compiler myself
Read the documentation; it tells you the restriction:
fn set_render_callback<F, D>(&mut self, f: F) -> Result<(), Error>
where
F: FnMut(Args<D>) -> Result<(), ()> + 'static, // <====
D: Data
This restriction means that any references inside of F must live at least as long as the 'static lifetime. Having no references is also acceptable.
All type and lifetime restrictions are expressed at the function boundary — this is a hard rule of Rust.
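To see what that + 'static bound means on its own, here is a small sketch with a made-up function (takes_static_closure is not part of coreaudio-rs); the failing call is commented out:
fn takes_static_closure<F: FnMut() -> usize + 'static>(mut f: F) -> usize {
    f()
}

fn main() {
    // A closure that owns its captured data contains no references at all,
    // so it satisfies the 'static bound:
    let owned = vec![1, 2, 3];
    println!("{}", takes_static_closure(move || owned.len()));

    // A closure that captures a reference to a local does not:
    let local = vec![1, 2, 3];
    let r = &local;
    let _ = r;
    // ERROR if uncommented: borrowed value does not live long enough;
    // the argument requires that `local` is borrowed for 'static.
    // takes_static_closure(move || r.len());
}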
I understand that the move keyword allows moving inside the closure, which doesn't help me understand why it interacts with lifetimes.
The only thing that the move keyword does is force every variable directly used in the closure to be moved into the closure. Otherwise, the compiler is conservative and captures each variable by reference, mutable reference, or value, based on how it is used inside the closure.
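A tiny sketch of that capture difference, unrelated to the audio code (the erroring line is commented out):
fn main() {
    let data = vec![1, 2, 3];
    // Without `move`, reading the length only needs a shared borrow,
    // so the closure captures `data` by reference:
    let by_ref = || data.len();
    println!("{} {}", by_ref(), data.len()); // `data` is still usable here

    let data2 = vec![1, 2, 3];
    // With `move`, `data2` itself is moved into the closure:
    let by_move = move || data2.len();
    println!("{}", by_move());
    // ERROR if uncommented: borrow of moved value `data2`.
    // println!("{}", data2.len());
}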
Why can't it move the buffer?
The variable buf is never used inside the closure.
Do I have to move both the buffer and the iterator into the closure? How would I do that?
By creating the iterator inside the closure. Now buf is used inside the closure and will be moved:
let callback = move |args| {
let mut samples = buf.iter();
// ...
};
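An alternative sketch, not tied to the coreaudio-rs API (register is a made-up stand-in for a callback setter with a 'static bound): if you want iteration to continue across calls rather than restart on every invocation, you can turn the buffer into an owning iterator first and move that in; the closure then contains no borrowed data at all.
// Hypothetical stand-in for a callback setter requiring F: FnMut(...) + 'static.
fn register<F: FnMut() -> Option<[f32; 2]> + 'static>(mut f: F) {
    let _ = f();
}

fn main() {
    let buf: Vec<[f32; 2]> = vec![[0.0, 0.0]];
    // `into_iter` consumes the Vec, so the iterator owns the samples.
    let mut samples = buf.into_iter();
    // Moving the owning iterator into the closure leaves no borrows behind,
    // so the closure satisfies the 'static bound.
    register(move || samples.next());
}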
It doesn't seem like guessing and compiling is always a straightforward method to resolving these issues.
Sometimes it is, and sometimes you have to think about why you believe the code to be correct and why the compiler states it isn't and come to an understanding.
I have a struct which references a value (because it is ?Sized or very big). This value has to live with the struct, of course.
However, the struct shouldn't restrict the user in how to accomplish that. Whether the user wraps the value in a Box or an Rc or makes it 'static, the value just has to survive along with the struct. Using named lifetimes would be complicated because the reference will be moved around and may outlive our struct. What I am looking for is a general pointer type (if it exists / can exist).
How can the struct make sure the referenced value lives as long as the struct lives, without specifying how?
Example (is.gd/Is9Av6):
type CallBack = Fn(f32) -> f32;
struct Caller {
call_back: Box<CallBack>,
}
impl Caller {
fn new(call_back: Box<CallBack>) -> Caller {
Caller {call_back: call_back}
}
fn call(&self, x: f32) -> f32 {
(self.call_back)(x)
}
}
let caller = {
// func goes out of scope
let func = |x| 2.0 * x;
Caller {call_back: Box::new(func)}
};
// func survives because it is referenced through a `Box` in `caller`
let y = caller.call(1.0);
assert_eq!(y, 2.0);
Compiles, all good. But if we don't want to use a Box as a pointer to our function (one can call Box a pointer, right?), but something else, like Rc, this won't be possible, since Caller restricts the pointer to be a Box.
let caller = {
// function is used by `Caller` and `main()` => shared resource
// solution: `Rc`
let func = Rc::new(|x| 2.0 * x);
let caller = Caller {call_back: func.clone()}; // ERROR Rc != Box
// we also want to use func now
let y = func(3.0);
caller
};
// func survives because it is referenced through a `Box` in `caller`
let y = caller.call(1.0);
assert_eq!(y, 2.0);
(is.gd/qUkAvZ)
Possible solution: Deref? (http://is.gd/mmY6QC)
use std::rc::Rc;
use std::ops::Deref;
type CallBack = Fn(f32) -> f32;
struct Caller<T>
where T: Deref<Target = Box<CallBack>> {
call_back: T,
}
impl<T> Caller<T>
where T: Deref<Target = Box<CallBack>> {
fn new(call_back: T) -> Caller<T> {
Caller {call_back: call_back}
}
fn call(&self, x: f32) -> f32 {
(*self.call_back)(x)
}
}
fn main() {
let caller = {
// function is used by `Caller` and `main()` => shared resource
// solution: `Rc`
let func_obj = Box::new(|x: f32| 2.0 * x) as Box<CallBack>;
let func = Rc::new(func_obj);
let caller = Caller::new(func.clone());
// we also want to use func now
let y = func(3.0);
caller
};
// func survives because it is referenced through a `Box` in `caller`
let y = caller.call(1.0);
assert_eq!(y, 2.0);
}
Is this the way to go with Rust? Using Deref? It works at least.
Am I missing something obvious?
This question did not solve my problem, since the value is practically unusable as a T.
While Deref provides the necessary functionality, AsRef and Borrow are more appropriate for this situation (Borrow more so than AsRef in the case of a struct). Both of these traits let your users use Box<T>, Rc<T> and Arc<T>, and Borrow also lets them use &T and T. Your Caller struct could be written like this:
use std::borrow::Borrow;
struct Caller<CB: Borrow<CallBack>> {
callback: CB,
}
Then, when you want to use the callback field, you need to call the borrow() (or as_ref()) method:
impl<CB> Caller<CB>
where CB: Borrow<CallBack>
{
fn new(callback: CB) -> Caller<CB> {
Caller { callback: callback }
}
fn call(&self, x: f32) -> f32 {
(self.callback.borrow())(x)
}
}
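Here is a self-contained sketch of how this looks from the user's side, written with the modern dyn spelling of the trait object and with the closure parameter types annotated to keep inference simple (the variable names are mine):
use std::borrow::Borrow;
use std::rc::Rc;

type CallBack = dyn Fn(f32) -> f32;

struct Caller<CB: Borrow<CallBack>> {
    callback: CB,
}

impl<CB: Borrow<CallBack>> Caller<CB> {
    fn call(&self, x: f32) -> f32 {
        // Borrow the callback as a plain &CallBack, whatever CB actually is.
        (self.callback.borrow())(x)
    }
}

fn main() {
    // Owning the callback through a Box:
    let boxed = Caller { callback: Box::new(|x: f32| 2.0 * x) as Box<CallBack> };
    assert_eq!(boxed.call(1.0), 2.0);

    // Sharing the callback through an Rc, so it stays usable outside the struct:
    let func: Rc<CallBack> = Rc::new(|x: f32| 3.0 * x);
    let shared = Caller { callback: Rc::clone(&func) };
    assert_eq!(shared.call(1.0), 3.0);
    assert_eq!(func(1.0), 3.0);
}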
It crashes with the current stable compiler (1.1), but not with beta or nightly (just use your last Playpen link and change the "Channel" setting at the top). I believe that support for Rc<Trait> was only partial in 1.1; there were some changes that didn't make it in time. This is probably why your code doesn't work.
To address the question of using Deref for this... if dereferencing the pointer is all you need... sure. It's really just a question of whether or not the trait(s) you've chosen support the operations you need. If yes, great.
As an aside, you can always write a new trait that expresses the exact semantics you need, and implement that for existing types. From what you've said, it doesn't seem necessary in this case.