Can't get RwLock to work in a multithreaded app

I am trying to use tokio::sync::RwLock because my app should reload a configuration file if/when it changes. This involves re-assigning a Vec variable with a new one, but there could be many readers at any one time.
There is a struct called Manifest that represents a "list" of binaries available for a piece of hardware that I designed. The list is a JSON file which is parsed at startup and whenever it changes.
use notify::{watcher, RecursiveMode, Watcher};
use semver::Version;
use serde::{Deserialize, Serialize};
use std::sync::mpsc::channel;
use std::time::Duration;
use std::{fs::File, sync::Arc, thread};
use tokio::sync::RwLock;
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Binary {
version: Version,
hardware: Version,
requires: Version,
file: String,
}
pub struct Manifest {
path: String,
manifest_changed: bool,
binaries: Arc<RwLock<Vec<Binary>>>,
}
impl Manifest {
pub fn new(path: String) -> Manifest {
let bins = Arc::new(RwLock::new(Vec::new()));
thread::spawn(move || async move {
let bins = Arc::clone(&bins);
let (tx, rx) = channel();
let mut watcher = watcher(tx, Duration::from_secs(10)).unwrap();
watcher.watch(path, RecursiveMode::NonRecursive).unwrap();
loop {
match rx.recv() {
Ok(event) => {
let file = File::open(path).unwrap();
let mut bins = bins.write().await;
*bins = serde_json::from_reader(file).unwrap();
}
Err(e) => println!("watch error: {:?}", e),
}
}
});
Manifest {
path: path,
manifest_changed: true,
binaries: bins,
}
}
pub async fn GetAvailableBinaries(self, hwv: Version, fwv: Version) -> Vec<Binary> {
self.binaries.read().await.to_vec()
}
}
My issue is that the compiler complains:
future cannot be sent between threads safely
future created by async block is not `Send`
help: the trait `std::marker::Sync` is not implemented for `std::sync::mpsc::Receiver<notify::DebouncedEvent>`rustc
manifest.rs(28, 9): future created by async block is not `Send`
manifest.rs(35, 23): has type `&std::sync::mpsc::Receiver<notify::DebouncedEvent>` which is not `Send`
manifest.rs(38, 40): await occurs here, with `rx` maybe used later
manifest.rs(44, 13): `rx` is later dropped here
mod.rs(617, 8): required by this bound in `std::thread::spawn`
I'm not sure I understand why it matters whether the channel Receiver implements Send, because it's being created inside the closure/thread. Beyond that reasoning, I really don't know what to look for.
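(A note for readers hitting the same error: std::thread::spawn requires the closure's return value to be Send, and here that return value is the async block's future, which holds rx across an .await, so it isn't Send. Below is a minimal sketch of one possible fix, not the poster's code: keep the watcher thread fully synchronous and use RwLock::blocking_write, assuming a tokio 1.x version where that method is available. The Arc is also cloned before being moved into the thread so the Manifest can keep its own handle.)
let bins = Arc::new(RwLock::new(Vec::new()));
let thread_bins = Arc::clone(&bins); // keep `bins` for the Manifest below
let thread_path = path.clone();
thread::spawn(move || {
    let (tx, rx) = channel();
    let mut watcher = watcher(tx, Duration::from_secs(10)).unwrap();
    watcher.watch(&thread_path, RecursiveMode::NonRecursive).unwrap();
    loop {
        match rx.recv() {
            // Re-parse the manifest whenever the watcher reports a change.
            Ok(_event) => {
                let file = File::open(&thread_path).unwrap();
                // blocking_write acquires the tokio RwLock from sync code,
                // so no async block (and no Send future) is needed.
                *thread_bins.blocking_write() = serde_json::from_reader(file).unwrap();
            }
            Err(e) => println!("watch error: {:?}", e),
        }
    }
});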

Related

How to transfer program state, like a window position, to the next run?

What would be the proper way to store a tiny bit of data for reuse?
I am thinking about creating an options.json file. Are there any crates or tools for this?
Or are there other options?
I was a bit bored, so here is the solution you were looking for. It attempts to save an options.json file in the same directory as the executable.
use serde::{Serialize, Deserialize};
use serde::de::DeserializeOwned;
use std::env::current_exe;
use std::io::{self, BufReader, BufWriter, Error, ErrorKind};
use std::fs::File;
pub fn save_data_for_next<D: Serialize>(data: &D) -> io::Result<()> {
let options_path = current_exe()?.parent().unwrap().join("options.json");
let writer = BufWriter::new(File::create(options_path)?);
serde_json::to_writer(writer, data).map_err(|e| Error::new(ErrorKind::Other, e))
}
pub fn load_previous_data<D: DeserializeOwned>() -> io::Result<Option<D>> {
let options_path = current_exe()?.parent().unwrap().join("options.json");
if !options_path.is_file() {
return Ok(None)
}
let reader = BufReader::new(File::open(options_path)?);
match serde_json::from_reader(reader) {
Ok(v) => Ok(Some(v)),
Err(e) => Err(Error::new(ErrorKind::Other, e))
}
}
Then all you need to do to use it is derive Serialize and Deserialize on some type. Alternatively, you could use serde_json::Value, which can safely save/load arbitrary JSON values. However, you may need to manually delete the options.json file when you change the contents of Options, since the program may panic upon failing to parse the previous version.
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize)]
struct Options {
window_position: (u32, u32),
}
pub fn main() {
let mut options = match load_previous_data::<Options>().unwrap() {
Some(v) => v,
// Options has not been created yet, so create some default config
None => Options {
window_position: (500, 500),
},
};
// Run program
// Save options before exiting
save_data_for_next(&options).unwrap();
}
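As a rough illustration of the serde_json::Value alternative mentioned above (reusing the same helpers; the key names are just examples):
use serde_json::{json, Value};
pub fn main() {
    // Fall back to a default JSON object if options.json doesn't exist yet.
    let mut options: Value = load_previous_data()
        .unwrap()
        .unwrap_or_else(|| json!({ "window_position": [500, 500] }));
    // Run program, mutating `options` as needed...
    options["window_position"] = json!([640, 480]);
    // Save options before exiting
    save_data_for_next(&options).unwrap();
}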

NetworkBehaviour unsatisfied trait bounds

I'm a beginner at Rust and I've been following this tutorial on creating a simple blockchain using Rust.
chain.rs
use byteorder::{BigEndian, ReadBytesExt};
use chrono::offset::Utc;
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::io::Cursor;
// Represents the entire chain in the network
pub struct Chain {
// Actual chain
pub blocks: Vec<Block>,
}
// Represents a single block in the blockchain
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Block {
pub id: u64,
// Hash representing block
pub hash: String,
// Hash of the previous block
pub previous_hash: String,
// Time of creation in UTC
pub timestamp: i64,
// Data contained in the block
pub data: String,
// Value for hashing the block(PoW)
pub nonce: u64,
}
p2p.rs
use std::collections::HashSet;
use super::chain::{Block, Chain};
use libp2p::{
floodsub::{Floodsub, FloodsubEvent, Topic},
identity::Keypair,
mdns::{Mdns, MdnsEvent},
swarm::{NetworkBehaviourEventProcess, Swarm},
NetworkBehaviour, PeerId,
};
use log::{error, info};
use once_cell::sync::Lazy;
use serde::{Deserialize, Serialize};
use serde_json;
use tokio::sync::mpsc;
// Topics for pub/sub protocol
// Impairments: broadcasts on each request, thus extremely inefficient
pub static BLOCK_TOPIC: Lazy<Topic> = Lazy::new(|| Topic::new("blocks"));
pub static CHAIN_TOPIC: Lazy<Topic> = Lazy::new(|| Topic::new("chains"));
// Key Pair for peer identification on network
pub static KEYS: Lazy<Keypair> = Lazy::new(Keypair::generate_ed25519);
// Peer id for peer identification on network
pub static PEER_ID: Lazy<PeerId> = Lazy::new(|| PeerId::from(KEYS.public()));
...
// Defines overall NetworkBehaviour for the chain
#[derive(NetworkBehaviour)]
pub struct ChainBehaviour {
// Chain
#[behaviour(ignore)]
pub chain: Chain,
// Handles FloodSub protocol
pub floodsub: Floodsub,
// Sends response to the UnboundedReceiver
#[behaviour(ignore)]
pub init_sender: mpsc::UnboundedSender<bool>,
// Handles automatic discovery of peers on the local network
// and adds them to the topology
pub mdns: Mdns,
// Sends response to the UnboundedReceiver
#[behaviour(ignore)]
pub response_sender: mpsc::UnboundedSender<ChainResponse>,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct ChainResponse {
pub blocks: Vec<Block>,
pub receiver: String,
}
// Triggers chain communication for requested ID
#[derive(Debug, Deserialize, Serialize)]
pub struct LocalChainRequest {
pub from_peer_id: String,
}
// Keep states for handling incoming messages, lazy init
// and keyboard input by the client's user
pub enum EventType {
LocalChainRequest(ChainResponse),
Input(String),
Init,
}
// Implement FloodsubEvent handling for ChainBehaviour
impl NetworkBehaviourEventProcess<FloodsubEvent> for ChainBehaviour {
fn inject_event(&mut self, event: FloodsubEvent) {
if let FloodsubEvent::Message(msg) = event {
// If message is of type ChainResponse and that the message is ours,
// we execute our consensus.
if let Ok(response) = serde_json::from_slice::<ChainResponse>(&msg.data) {
if response.receiver == PEER_ID.to_string() {
info!("Response from {}:", msg.source);
response.blocks.iter().for_each(|r| info!("{:?}", r));
self.chain.blocks = self
.chain
.choose(self.chain.blocks.clone(), response.blocks);
}
} else if let Ok(response) = serde_json::from_slice::<LocalChainRequest>(&msg.data) {
// If of type LocalChainRequest, we send ChainResponse to
// initiator
info!("sending local chain to {}", msg.source.to_string());
let peer_id = response.from_peer_id;
if PEER_ID.to_string() == peer_id {
if let Err(e) = self.response_sender.send(ChainResponse {
blocks: self.chain.blocks.clone(),
receiver: msg.source.to_string(),
}) {
error!("error sending response via channel, {}", e);
};
}
} else if let Ok(block) = serde_json::from_slice::<Block>(&msg.data) {
// If of type Block, we try adding the block if valid
info!("received new block from {}", msg.source.to_string());
self.chain.try_add_block(block);
}
}
}
}
// Implement MdnsEvents for ChainBehaviour
impl NetworkBehaviourEventProcess<MdnsEvent> for ChainBehaviour {
fn inject_event(&mut self, event: MdnsEvent) {
match event {
// Add node to list of nodes when discovered
MdnsEvent::Discovered(nodes) => {
for (peer, _) in nodes {
self.floodsub.add_node_to_partial_view(peer)
}
}
// Remove node from list of nodes when TTL expires and
// address hasn't been refreshed
MdnsEvent::Expired(nodes) => {
for (peer, _) in nodes {
if !self.mdns.has_node(&peer) {
self.floodsub.remove_node_from_partial_view(&peer);
}
}
}
}
}
}
Here's my Cargo.toml (dependencies):
byteorder = "1"
chrono = "0.4.19"
getrandom = "0.2.3"
hex = "0.4.3"
libp2p = {version = "0.41.0", features = ['tcp-tokio', "mdns"]}
log = "0.4.14"
once_cell = "1.9.0"
oorandom = "11.1.3"
pretty_env_logger = "0.4.0"
serde = {version = "1.0.133", features = ["derive"]}
serde_json = "1.0.74"
sha2 = "0.10.0"
tokio = { version = "1.15.0", features = ["io-util", "io-std", "macros", "rt", "rt-multi-thread", "sync", "time"] }
I keep getting the following error on this struct:
error[E0277]: the trait bound `(): From<MdnsEvent>` is not satisfied
--> src/p2p.rs:30:10
|
30 | #[derive(NetworkBehaviour)]
| ^^^^^^^^^^^^^^^^ the trait `From<MdnsEvent>` is not implemented for `()`
|
= help: see issue #48214
= note: this error originates in the derive macro `NetworkBehaviour` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0277]: the trait bound `(): From<FloodsubEvent>` is not satisfied
--> src/p2p.rs:30:10
|
30 | #[derive(NetworkBehaviour)]
| ^^^^^^^^^^^^^^^^ the trait `From<FloodsubEvent>` is not implemented for `()`
|
= help: see issue #48214
= note: this error originates in the derive macro `NetworkBehaviour` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0277`.
I tried following the suggestions from this forum and got the following error:
error[E0599]: no method named `into_inner` found for struct `OneShotHandler` in the current scope
--> src/p2p.rs:30:10
|
30 | #[derive(NetworkBehaviour)]
| ^^^^^^^^^^^^^^^^ method not found in `OneShotHandler<FloodsubProtocol, FloodsubRpc, floodsub::layer::InnerMessage>`
|
= note: this error originates in the derive macro `NetworkBehaviour` (in Nightly builds, run with -Z macro-backtrace for more info)
I also keep getting an "add reference here" error on #[derive(NetworkBehaviour)] from rust-analyzer. What could be causing these errors and how can I fix them?
Link to the tutorial's GitHub repo.
Link to my GitHub repo. The entire codebase is rather large but should only produce the aforementioned errors.
This question is nearly a year old, but maybe someone will run into this again in the future.
I stumbled over the same issue, funnily enough while working through the same tutorial. It seems that newer versions of libp2p require you to do two things that the version used for the tutorial did not seem to require:
1. Specify #[behaviour(out_event = "Event")] on the AppBehaviour struct. This is mentioned as optional in the docs, but if you don't specify it, the macro falls back to a default out_event type, which is where the From<MdnsEvent> / From<FloodsubEvent> errors above come from.
2. Implement the trait From<...> for the Event enum for every event type emitted by the struct members of AppBehaviour, since the events emitted by the struct members are wrapped in that enum.
I've added two new enum values for this as shown in the libp2p docs:
use crate::net::{ChainResponse};
use libp2p::{floodsub::FloodsubEvent, mdns::MdnsEvent};
pub enum Event {
ChainResponse(ChainResponse),
Floodsub(FloodsubEvent),
Mdns(MdnsEvent),
Input(String),
Init,
}
impl From<FloodsubEvent> for Event {
fn from(event: FloodsubEvent) -> Self {
Self::Floodsub(event)
}
}
impl From<MdnsEvent> for Event {
fn from(event: MdnsEvent) -> Self {
Self::Mdns(event)
}
}
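As a rough sketch (not from the original answer) of how the behaviour struct might then be annotated, assuming a libp2p version whose derive still accepts #[behaviour(ignore)]:
#[derive(NetworkBehaviour)]
#[behaviour(out_event = "Event")]
pub struct ChainBehaviour {
    pub floodsub: Floodsub,
    pub mdns: Mdns,
    // Fields that are not behaviours themselves are skipped by the derive.
    #[behaviour(ignore)]
    pub chain: Chain,
    #[behaviour(ignore)]
    pub init_sender: mpsc::UnboundedSender<bool>,
    #[behaviour(ignore)]
    pub response_sender: mpsc::UnboundedSender<ChainResponse>,
}
Depending on the libp2p version, with out_event set the behaviour's events may then have to be handled where the swarm is polled (via SwarmEvent::Behaviour(event)) rather than through NetworkBehaviourEventProcess.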

What is the Rust equivalent of C++'s shared_from_this?

I have an object that I know is inside an Arc, because all the instances are always Arced. I would like to be able to pass a cloned Arc of myself in a function call. The thing I am calling will call me back later on other threads.
In C++, there is a standard mixin called enable_shared_from_this. It enables me to do exactly this
class Bus : public std::enable_shared_from_this<Bus>
{
....
void SetupDevice(Device device,...)
{
device->Attach(shared_from_this());
}
}
If this object is not under shared_ptr management (the closest C++ has to Arc) then this will fail at run time.
I cannot find an equivalent.
EDIT:
Here is an example of why it's needed. I have a timerqueue library. It allows a client to request that an arbitrary closure be run at some point in the future. The code is run on a dedicated thread. To use it you must pass a closure for the function you want executed later.
use std::time::{Duration, Instant};
use timerqueue::*;
use parking_lot::Mutex;
use std::sync::{Arc,Weak};
use std::ops::{DerefMut};
// MeKeeper inlined here since it's not on GitHub
pub struct MeKeeper<T> {
them: Mutex<Weak<T>>,
}
impl<T> MeKeeper<T> {
pub fn new() -> Self {
Self {
them: Mutex::new(Weak::new()),
}
}
pub fn save(&self, arc: &Arc<T>) {
*self.them.lock().deref_mut() = Arc::downgrade(arc);
}
pub fn get(&self) -> Arc<T> {
match self.them.lock().upgrade() {
Some(arc) => return arc,
None => unreachable!(),
}
}
}
// -----------------------------------
struct Test {
data:String,
me: MeKeeper<Self>,
}
impl Test {
pub fn new() -> Arc<Test>{
let arc = Arc::new(Self {
me: MeKeeper::new(),
data: "Yo".to_string()
});
arc.me.save(&arc);
arc
}
fn task(&self) {
println!("{}", self.data);
}
// in real use case the TQ and a ton of other status data is passed in the new call for Test
// to keep things simple here the 'container' passes tq as an arg
pub fn do_stuff(&self, tq: &TimerQueue) {
// stuff includes a async task that must be done in 1 second
//.....
let me = self.me.get().clone();
tq.queue(
Box::new(move || me.task()),
"x".to_string(),
Instant::now() + Duration::from_millis(1000),
);
}
}
fn main() {
// in the real case (a PDP-11 emulator) there is a Bus class owning tons of objects that's
// alive for the whole duration
let tq = Arc::new(TimerQueue::new());
let test = Test::new();
test.do_stuff(&*tq);
// just to keep everything alive while we wait
let mut input = String::new();
std::io::stdin().read_line(&mut input).unwrap();
}
Cargo.toml:
[package]
name = "tqclient"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
timerqueue = { git = "https://github.com/pm100/timerqueue.git" }
parking_lot = "0.11"
There is no way to go from a &self to the Arc that self is stored in. This is because:
Rust references have additional assumptions compared to C++ references that would make such a conversion undefined behavior.
Rust's implementation of Arc does not even expose the information necessary to determine whether self is stored in an Arc or not.
Luckily, there is an alternative approach. Instead of creating a &self to the value inside the Arc, and passing that to the method, pass the Arc directly to the method that needs to access it. You can do that like this:
use std::sync::Arc;
struct Shared {
field: String,
}
impl Shared {
fn print_field(self: Arc<Self>) {
let clone: Arc<Shared> = self.clone();
println!("{}", clone.field);
}
}
Then the print_field function can only be called on a Shared encapsulated in an Arc.
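A small usage sketch (not from the original answer) of how such a method is called:
fn main() {
    let shared = Arc::new(Shared { field: "hello".to_string() });
    // print_field takes self: Arc<Self>, so we pass an owned Arc;
    // cloning first keeps our own handle alive for later use.
    Arc::clone(&shared).print_field();
    println!("{}", shared.field);
}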
Having found that I needed this three times in recent days, I decided to stop trying to come up with other designs. It may be poor data design as far as Rust is concerned, but I needed it.
It works by changing the new function of the types using it to return an Arc rather than a raw Self. All my objects are Arced anyway; before, they were Arced by the caller, now it's forced.
A mini util library called MeKeeper:
use parking_lot::Mutex;
use std::sync::{Arc,Weak};
use std::ops::{DerefMut};
pub struct MeKeeper<T> {
them: Mutex<Weak<T>>,
}
impl<T> MeKeeper<T> {
pub fn new() -> Self {
Self {
them: Mutex::new(Weak::new()),
}
}
pub fn save(&self, arc: &Arc<T>) {
*self.them.lock().deref_mut() = Arc::downgrade(arc);
}
pub fn get(&self) -> Arc<T> {
match self.them.lock().upgrade() {
Some(arc) => return arc,
None => unreachable!(),
}
}
}
To use it:
pub struct Test {
me: MeKeeper<Self>,
foo:i8,
}
impl Test {
pub fn new() -> Arc<Self> {
let arc = Arc::new(Test {
me: MeKeeper::new(),
foo:42
});
arc.me.save(&arc);
arc
}
}
Now, when an instance of Test wants to call a function that requires it to pass in an Arc, it does:
fn nargle(&self) {
    let me = self.me.get();
    Ooddle::fertang(me, 42); // fertang needs an Arc<T>
}
The use of Weak is what shared_from_this does internally, so as to prevent reference-count cycles; I stole that idea.
The unreachable path is safe because the only place that can call MeKeeper::get is the instance of T (Test here) that owns it, and that call can only happen if the T instance is alive. Hence Weak::upgrade never returns None.

Spawning tasks with non-static lifetimes with tokio 0.1.x

I have a tokio core whose main task is running a websocket (client). When I receive some messages from the server, I want to execute a new task that will update some data. Below is a minimal failing example:
use tokio_core::reactor::{Core, Handle};
use futures::future::Future;
use futures::future;
struct Client {
handle: Handle,
data: usize,
}
impl Client {
fn update_data(&mut self) {
// spawn a new task that updates the data
self.handle.spawn(future::ok(()).and_then(|x| {
self.data += 1; // error here
future::ok(())
}));
}
}
fn main() {
let mut runtime = Core::new().unwrap();
let mut client = Client {
handle: runtime.handle(),
data: 0,
};
let task = future::ok::<(), ()>(()).and_then(|_| {
// under some conditions (omitted), we update the data
client.update_data();
future::ok::<(), ()>(())
});
runtime.run(task).unwrap();
}
Which produces this error:
error[E0477]: the type `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:13:51: 16:10 self:&mut &mut Client]>` does not fulfill the required lifetime
--> src/main.rs:13:21
|
13 | self.handle.spawn(future::ok(()).and_then(|x| {
| ^^^^^
|
= note: type must satisfy the static lifetime
The problem is that new tasks spawned through a handle need to be 'static. The same issue is described here. Sadly, it is unclear to me how I can fix the issue. Even with some attempts using an Arc and a Mutex (which really shouldn't be needed for a single-threaded application), I was unsuccessful.
Since developments occur rather quickly in the tokio landscape, I am wondering what the current best solution is. Do you have any suggestions?
EDIT:
The solution by Peter Hall works for the example above. Sadly, when I built the failing example I changed the tokio reactor, thinking they would be similar. Using tokio::runtime::current_thread:
use futures::future;
use futures::future::Future;
use futures::stream::Stream;
use std::cell::Cell;
use std::rc::Rc;
use tokio::runtime::current_thread::{Builder, Handle};
struct Client {
handle: Handle,
data: Rc<Cell<usize>>,
}
impl Client {
fn update_data(&mut self) {
// spawn a new task that updates the data
let mut data = Rc::clone(&self.data);
self.handle.spawn(future::ok(()).and_then(move |_x| {
data.set(data.get() + 1);
future::ok(())
}));
}
}
fn main() {
// let mut runtime = Core::new().unwrap();
let mut runtime = Builder::new().build().unwrap();
let mut client = Client {
handle: runtime.handle(),
data: Rc::new(Cell::new(1)),
};
let task = future::ok::<(), ()>(()).and_then(|_| {
// under some conditions (omitted), we update the data
client.update_data();
future::ok::<(), ()>(())
});
runtime.block_on(task).unwrap();
}
I obtain:
error[E0277]: `std::rc::Rc<std::cell::Cell<usize>>` cannot be sent between threads safely
--> src/main.rs:17:21
|
17 | self.handle.spawn(future::ok(()).and_then(move |_x| {
| ^^^^^ `std::rc::Rc<std::cell::Cell<usize>>` cannot be sent between threads safely
|
= help: within `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::Cell<usize>>`
= note: required because it appears within the type `[closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]`
= note: required because it appears within the type `futures::future::chain::Chain<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`
= note: required because it appears within the type `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`
So it does seem like in this case I need an Arc and a Mutex even though the entire code is single-threaded?
In a single-threaded program, you don't need to use Arc; Rc is sufficient:
use std::{rc::Rc, cell::Cell};
struct Client {
handle: Handle,
data: Rc<Cell<usize>>,
}
impl Client {
fn update_data(&mut self) {
let data = Rc::clone(&self.data);
self.handle.spawn(future::ok(()).and_then(move |_x| {
data.set(data.get() + 1);
future::ok(())
}));
}
}
The point is that you no longer have to worry about the lifetime because each clone of the Rc acts as if it owns the data, rather than accessing it via a reference to self. The inner Cell (or RefCell for non-Copy types) is needed because the Rc can't be dereferenced mutably once it has been cloned.
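For non-Copy data, the same idea would use RefCell instead of Cell; a tiny self-contained sketch (the names here are made up, not from the question):
use std::{cell::RefCell, rc::Rc};
fn main() {
    let log: Rc<RefCell<Vec<String>>> = Rc::new(RefCell::new(Vec::new()));
    // Each clone acts like an owner of the data, so it can be moved into a
    // 'static task without borrowing from self.
    let log_for_task = Rc::clone(&log);
    log_for_task.borrow_mut().push("updated from a task".to_string());
    println!("{:?}", log.borrow());
}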
The spawn method of tokio::runtime::current_thread::Handle requires that the future is Send, which is what is causing the problem in the update to your question. There is an explanation (of sorts) for why this is the case in this Tokio Github issue.
You can use tokio::runtime::current_thread::spawn instead of the method of Handle, which will always run the future in the current thread, and does not require that the future is Send. You can replace self.handle.spawn in the code above and it will work just fine.
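A minimal sketch of that alternative, with a trimmed-down Client and the same tokio 0.1 / futures 0.1 setup as above:
use std::{cell::Cell, rc::Rc};
use futures::future;
use futures::future::Future;
use tokio::runtime::current_thread;
struct Client {
    data: Rc<Cell<usize>>,
}
impl Client {
    fn update_data(&mut self) {
        let data = Rc::clone(&self.data);
        // current_thread::spawn does not require the future to be Send,
        // because the task is guaranteed to run on the current thread.
        current_thread::spawn(future::ok(()).and_then(move |_x| {
            data.set(data.get() + 1);
            future::ok(())
        }));
    }
}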
If you need to use the method on Handle then you will also need to resort to Arc and Mutex (or RwLock) in order to satisfy the Send requirement:
use std::sync::{Mutex, Arc};
struct Client {
handle: Handle,
data: Arc<Mutex<usize>>,
}
impl Client {
fn update_data(&mut self) {
let data = Arc::clone(&self.data);
self.handle.spawn(future::ok(()).and_then(move |_x| {
*data.lock().unwrap() += 1;
future::ok(())
}));
}
}
If your data is really a usize, you could also use AtomicUsize instead of Mutex<usize>, but I personally find it just as unwieldy to work with.
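For completeness, a short sketch of the AtomicUsize variant just mentioned (same setup as the rest of the answer):
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
struct Client {
    handle: Handle,
    data: Arc<AtomicUsize>,
}
impl Client {
    fn update_data(&mut self) {
        let data = Arc::clone(&self.data);
        self.handle.spawn(future::ok(()).and_then(move |_x| {
            // fetch_add is atomic, so no lock is needed.
            data.fetch_add(1, Ordering::SeqCst);
            future::ok(())
        }));
    }
}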

How do I have internal state for a custom logger which only takes &self?

I'm trying to implement a simple logger using the log crate.
The logger should behave like this:
[1] First log message
[2] Second log message
[3] Third log message
To implement this, I have my logger struct
struct SeqLogger {
seq: i64,
}
and implement the Log trait's
fn enabled(&self, metadata: &Metadata) -> bool
fn log(&self, record: &Record)
fn flush(&self)
In the log(&self, record: &Record) implementation, I would do:
fn log(&self, record: &Record) {
println!("[{}] {}", self.seq, record.args());
self.seq = self.seq + 1;
}
However, the compiler complains that self is not mutable. Am I going about this the right way? How can I update the state of the logger without &mut self?
It seems that the log crate doesn't intend for loggers to have any mutable internal state, so it forces them to be shared as immutable. This eases things a lot, in fact, since a logger usually has to be shared between threads and used simultaneously, and that's not possible with &mut self.
However, there's a usual workaround: interior mutability. There's a type, std::cell::Cell, designed exactly for this use case: having an immutable reference to something that should be mutable. Your internal state is simply an integer, so it's Copy, and we can just try to use Cell as-is:
extern crate log; // 0.4.5
use log::*;
use std::cell::Cell;
struct SeqLogger {
seq: Cell<i64>,
}
impl Log for SeqLogger {
fn log(&self, record: &Record) {
println!("[{}] {}", self.seq.get(), record.args());
self.seq.set(self.seq.get() + 1);
}
fn enabled(&self, metadata: &Metadata) -> bool { if false {true} else {unimplemented!()} }
fn flush(&self) { unimplemented!(); }
}
However, the compiler immediately becomes angry again:
error[E0277]: `std::cell::Cell<i64>` cannot be shared between threads safely
--> src/lib.rs:9:6
|
9 | impl Log for SeqLogger {
| ^^^ `std::cell::Cell<i64>` cannot be shared between threads safely
|
= help: within `SeqLogger`, the trait `std::marker::Sync` is not implemented for `std::cell::Cell<i64>`
= note: required because it appears within the type `SeqLogger`
This makes sense since, as I said before, the logger itself must be Sync, so we must guarantee that it's safe to share its contents too. At the same time, Cell is not Sync, exactly because of the interior mutability we're using here. Again, there's a usual way to fix it: Mutex:
extern crate log; // 0.4.5
use log::*;
use std::cell::Cell;
use std::sync::Mutex;
struct SeqLogger {
seq: Mutex<Cell<i64>>,
}
impl Log for SeqLogger {
fn log(&self, record: &Record) {
let seq = self.seq.lock().unwrap(); // perhaps replace this with match in production
println!("[{}] {}", seq.get(), record.args());
seq.set(seq.get() + 1);
}
fn enabled(&self, metadata: &Metadata) -> bool { if false {true} else {unimplemented!()} }
fn flush(&self) { unimplemented!(); }
}
Now it compiles just fine.
Playground with the last variant
EDIT: According to the comments, we can strip away one layer of indirection, since Mutex grants us both interior mutability (sort of) and Sync-ness. So we can remove the Cell and dereference the MutexGuard directly:
// --snip--
fn log(&self, record: &Record) {
let mut seq = self.seq.lock().unwrap(); // perhaps replace this with match in production
println!("[{}] {}", *seq, record.args());
*seq = *seq + 1;
}
// --snip--
And furthermore, since our state is just an integer, we can use a standard atomic type instead of Mutex. Note that AtomicI64 is unstable, so you might want to use AtomicIsize or AtomicUsize instead:
use std::sync::atomic::{AtomicIsize, Ordering};
struct SeqLogger {
seq: AtomicIsize,
}
impl Log for SeqLogger {
fn log(&self, record: &Record) {
let id = self.seq.fetch_add(1, Ordering::SeqCst);
println!("[{}] {}", id, record.args());
}
// --snip--
}
Playground
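Not part of the original answer, but a hedged sketch of how such a logger might be installed with log::set_logger, assuming the atomic variant above:
use log::{LevelFilter, SetLoggerError};
use std::sync::atomic::AtomicIsize;
static LOGGER: SeqLogger = SeqLogger {
    seq: AtomicIsize::new(1),
};
fn init() -> Result<(), SetLoggerError> {
    // set_logger takes a &'static dyn Log, so a static logger works here.
    log::set_logger(&LOGGER).map(|()| log::set_max_level(LevelFilter::Info))
}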
