I'm new to Rust, so please bear with me.
I'm trying to do something I'd consider quite simple: insert an item into a HashMap. When I try to insert an item, the program just freezes and I can't figure out why.
VS Code isn't picking up any errors, and I've been googling around without finding anything that points to the issue.
I feel like it might be related to the Mutex, but I'm really not sure. Below is a snippet of the relevant bit of the code.
use lazy_static::lazy_static;
use std::collections::{HashMap, HashSet};
use std::net::TcpStream;
use std::sync::Mutex;
lazy_static! {
    static ref SUBSCRIPTIONS: Mutex<HashMap<String, HashSet<String>>> = Mutex::new(HashMap::new());
}
pub struct Subscription {}

impl Subscription {
    fn is_channel_registered(&self, channel: &String) -> bool {
        SUBSCRIPTIONS.lock().unwrap().contains_key(channel)
    }

    pub fn add_subscription(&self, client: &TcpStream, channel: &String) {
        let mut subscriptions = SUBSCRIPTIONS.lock().unwrap();
        // Check if a key for the channel already exists. If not, create it.
        if !self.is_channel_registered(&channel) {
            subscriptions.insert(channel.to_string(), HashSet::new());
        }
    }
}
Now, I am trying to test this using a unit test.
Below is that chunk of code:
#[cfg(test)]
mod subscription_tests {
use super::*;
use std::net::TcpStream;
#[test]
fn test_add_subscription() {
let client = TcpStream::connect("localhost:8080").unwrap();
let channel = "test_channel".to_string();
Subscription {}.add_subscription(&client, &channel);
}
}
I've commented out a bunch of code to try to find the issue by process of elimination, but it really looks like the problem is in the code in the initial block.
Welp! I'm hoping this is enough to reproduce, but just in case, here is the code.
is_channel_registered() tries to lock the mutex while it's already locked. That results in a deadlock.
Either pass the unlocked value explicitly to it:
impl Subscription {
    fn is_channel_registered(subscriptions: &HashMap<String, HashSet<String>>, channel: &String) -> bool {
        subscriptions.contains_key(channel)
    }

    pub fn add_subscription(&self, client: &TcpStream, channel: &String) {
        let mut subscriptions = SUBSCRIPTIONS.lock().unwrap();
        // Check if a key for the channel already exists. If not, create it.
        if !Self::is_channel_registered(&*subscriptions, &channel) {
            subscriptions.insert(channel.to_string(), HashSet::new());
        }
    }
}
Or lock it inside add_subscription() only after is_channel_registered() is used:
impl Subscription {
    fn is_channel_registered(&self, channel: &String) -> bool {
        SUBSCRIPTIONS.lock().unwrap().contains_key(channel)
    }

    pub fn add_subscription(&self, client: &TcpStream, channel: &String) {
        // Check if a key for the channel already exists. If not, create it.
        if !self.is_channel_registered(&channel) {
            SUBSCRIPTIONS.lock().unwrap().insert(channel.to_string(), HashSet::new());
        }
    }
}
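As a side note (not part of the original answer), a minimal sketch of a third option: HashMap::entry inserts a default value only when the key is missing, so the separate check disappears and the lock is taken exactly once. This assumes the same statics and impl block as above.

pub fn add_subscription(&self, client: &TcpStream, channel: &String) {
    // entry() finds the existing entry for `channel` or a vacant one;
    // or_insert_with() only creates the HashSet when the key was missing.
    SUBSCRIPTIONS
        .lock()
        .unwrap()
        .entry(channel.to_string())
        .or_insert_with(HashSet::new);
}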
I am having a hard time figuring out how to sort out this issue.
So I have a class ArcWorker holding a shared reference to Worker (as you can see below).
I wrote a function in ArcWorker called join() in which the line self.internal.lock().unwrap().join(); fails with the following error:
cannot move out of dereference of std::sync::MutexGuard<'_, models::worker::Worker>
What I attempt through that line is to lock the mutex, unwrap and call the join() function from the Worker class.
As far as I understand, once the lock function is called it borrows a reference to self (&self), but then I need some way to pass self by value to join (std::thread's join function requires passing self by value).
What can I do to make this work? Tried to find an answer to my question for hours but to no avail.
pub struct Worker {
accounts: Vec<Arc<Mutex<Account>>>,
thread_join_handle: Option<thread::JoinHandle<()>>
}
pub struct ArcWorker {
internal: Arc<Mutex<Worker>>
}
impl ArcWorker {
pub fn new(accounts: Vec<Arc<Mutex<Account>>>) -> ArcWorker {
return ArcWorker {
internal: Arc::new(Mutex::new(Worker {
accounts: accounts,
thread_join_handle: None
}))
}
}
pub fn spawn(&self) {
let local_self_1 = self.internal.clone();
self.internal.lock().unwrap().thread_join_handle = Some(thread::spawn(move || {
println!("Spawn worker");
local_self_1.lock().unwrap().perform_random_transactions();
}));
}
pub fn join(&self) {
self.internal.lock().unwrap().join();
}
}
impl Worker {
fn join(self) {
if let Some(thread_join_handle) = self.thread_join_handle {
thread_join_handle.join().expect("Couldn't join the associated threads.")
}
}
fn perform_random_transactions(&self) {
}
}
Since you already hold the JoinHandle in an Option, you can make Worker::join() take &mut self instead of self and change the if let condition to:
// note added `.take()`
if let Some(thread_join_handle) = self.thread_join_handle.take() {
Option::take() will move the handle out of the Option and give you ownership of it, while leaving None in self.thread_join_handle. With this change, ArcWorker::join() should compile as-is.
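For reference, a minimal sketch of how the two methods end up looking after that change, assuming the surrounding types from the question:

impl Worker {
    // Takes &mut self now, so it can be called through the MutexGuard.
    fn join(&mut self) {
        if let Some(thread_join_handle) = self.thread_join_handle.take() {
            thread_join_handle.join().expect("Couldn't join the associated threads.");
        }
    }
}

impl ArcWorker {
    pub fn join(&self) {
        // The guard derefs to &mut Worker, which is all join() needs now.
        self.internal.lock().unwrap().join();
    }
}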
I have a Terminal and a TerminalFactory type. What I want to achieve is a factory pattern or at least an implementation that is similar to it in regards to some key aspects.
In particular, I want every Terminal to have specific properties (e.g. id) that are set by TerminalFactory. In GRASP terms, TerminalFactory is the Creator and the Information Expert because it knows next_id.
TerminalFactory is not a singleton; it is an instance that will be injected into a container that gets handed around where necessary (I'm trying to implement an application based on the composite pattern to achieve the OCP of SOLID).
This seems in line with the fact that Rust discourages static state unless you resort to unsafe code.
The problem now is that instances of Terminal have an id which should not be set by arbitrary code, so it's not pub. It should still be readable, though, so I added a public getter, like this:
pub struct Terminal {
id: u32, // private
pub name: String
}
impl Terminal {
pub fn get_id(self: &Self) -> u32 {
self.id
}
}
As far as I can tell, there is no friend keyword in Rust, there is no static state I can keep in the Terminal type, and TerminalFactory cannot set Terminal::id during creation because it's not public.
I have the impression I can't implement the factory pattern correctly. Maybe I shouldn't even try to transfer "classical patterns" to Rust? Then what about the SOLID principles, which are absolutely necessary to manage change in medium-scale applications (say, 50+ types and beyond 10,000 lines of code)?
Adding an associated function new(id: u32) to Terminal would completely defeat the purpose of the factory, so that doesn't make sense either.
As all of this code will be in a library that is consumed by other Rust code, how can I protect the consumer code from creating broken instances?
Put both types in the same module:
mod terminal {
pub struct Terminal {
id: u32,
pub name: String
}
impl Terminal {
pub fn get_id(&self) -> u32 {
self.id
}
}
pub struct TerminalFactory {
next_id: u32,
}
impl TerminalFactory {
pub fn new() -> Self {
Self {
next_id: 0,
}
}
pub fn new_terminal(&mut self, name: &str) -> Terminal {
let next_id = self.next_id;
self.next_id += 1;
Terminal {
id: next_id,
name: name.to_owned(),
}
}
}
}
Then this is OK:
let mut tf = terminal::TerminalFactory::new();
let t0 = tf.new_terminal("foo");
let t1 = tf.new_terminal("bar");
println!("{} {}", t0.get_id(), t0.name);
println!("{} {}", t1.get_id(), t1.name);
But this is not:
println!("{} {}", t0.id, t0.name); // error: private field
See Visibility and privacy for other scenarios.
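For example, if the factory has to live in a different module of the same crate, pub(crate) visibility is one of those other scenarios. A minimal sketch, with an illustrative module layout:

mod terminal {
    pub struct Terminal {
        pub(crate) id: u32, // settable anywhere inside this crate, invisible to consumers
        pub name: String,
    }

    impl Terminal {
        pub fn get_id(&self) -> u32 {
            self.id
        }
    }
}

mod factory {
    use super::terminal::Terminal;

    pub struct TerminalFactory {
        next_id: u32,
    }

    impl TerminalFactory {
        pub fn new() -> Self {
            Self { next_id: 0 }
        }

        pub fn new_terminal(&mut self, name: &str) -> Terminal {
            let id = self.next_id;
            self.next_id += 1;
            Terminal { id, name: name.to_owned() }
        }
    }
}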
I've an Arc<Mutex<Thing>> field in a struct which is cloned many times. It is shared between concurrent threads. Drop::drop is called for each clone as it goes out of scope. Is there any way to determine when Drop::drop is called for the last (unique) Arc<Mutex<Thing>>?
It's clear that strong_count is subject to data races (I've seen them). So, you can't count on Arc::strong_count() == 1 (no pun intended).
I found that I couldn't use Arc::try_unwrap() due to a move issue.
Arc::is_unique() is private.
Other than keeping a Arc<AtomicUsize> field, which is incremented on clone and decremented on drop, is there any way to determine if a drop is for a unique Arc<Mutex<Thing>>?
Here's an MRE:
use std::sync::{Arc};
#[derive(Debug)]
enum Action {
One, Two, Three
}
// Thing trait which operates on an Action, which should be an enum, allowing for
// different action sets.
trait Thing<T> {
fn disconnected(&self);
fn action(&self, action: T);
}
// There are many instances of an ActionController.
// There may be zero or more clones of an instance.
// The final drop of the instances should call thing.disconnected()
// In a multi-core environment, the same instance may be running on multiple cores
// ActionController should not be generic.
#[derive(Clone)]
struct ActionController {
id: usize,
thing: Arc<dyn Thing<Action>>,
}
impl ActionController {
fn new(id: usize, thing: Box<dyn Thing<Action>>) -> Self {
Self { id, thing: Arc::from(thing) }
}
fn invoke(&self, action: Action) {
self.thing.action(action);
}
}
//
// To work around the drop issue, I've implemented Clone for ActionController which
// performs a fetch_add(1) on clone and a fetch_sub(1) on drop. This provides
// sufficient information to call disconnected() -- but it just seems like there's
// got to be a better way.
impl Drop for ActionController {
fn drop(&mut self) {
// drop will be called for each clone of a Controller instance. When
// the unique instance is dropped, disconnected() must be called
self.thing.disconnected();
}
}
struct Controlled {}
impl Thing<Action> for Controlled {
fn disconnected(&self) { println!("disconnected")}
fn action(&self, action: Action) {println!("action: {:#?}", action)}
}
fn bad() {
let controlled = Controlled{};
let controlled = Box::new(controlled) as Box<dyn Thing<Action>>;
let controller = ActionController::new(1, controlled);
let clone = controller.clone();
controller.invoke(Action::One);
clone.invoke(Action::Two);
drop (controller);
clone.invoke(Action::Three);
}
fn main() {
bad();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn incorrect() {
bad();
}
}
Arc::try_unwrap is probably the intended way to do this - is it possible to restructure your code to avoid the move issues you were running into?
Why do you want to know? If you have some extra cleanup code that needs to be executed before the Mutex<Thing> is dropped, maybe you could use an Arc<MyLockedThing> instead, where MyLockedThing is a struct containing a Mutex<Thing> that impls Drop to do the cleanup?
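A rough sketch of that suggestion (MyLockedThing and the cleanup message are hypothetical, and Thing here stands in for a concrete type rather than the trait from the question):

use std::sync::{Arc, Mutex};

struct Thing; // stand-in for the real data

struct MyLockedThing {
    inner: Mutex<Thing>,
}

impl Drop for MyLockedThing {
    fn drop(&mut self) {
        // Runs exactly once, when the last Arc<MyLockedThing> is dropped.
        println!("cleaning up before the Mutex<Thing> goes away");
    }
}

fn main() {
    let a = Arc::new(MyLockedThing { inner: Mutex::new(Thing) });
    let b = Arc::clone(&a);
    assert!(a.inner.lock().is_ok()); // the Mutex is still usable as before
    drop(a); // nothing printed yet
    drop(b); // prints the cleanup message
}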
It seems like you want to be notified when the data inside the Arc is to be dropped. If so, this can be done by implementing Drop on the type "inside" the Arc.
Define a newtype:
struct ThingAction(Box<dyn Thing<Action>>);
impl Thing<Action> for ThingAction {
fn disconnected(&self) {
self.0.disconnected()
}
fn action(&self, action: Action) {
self.0.action(action)
}
}
And implement Drop:
impl Drop for ThingAction {
fn drop(&mut self) {
self.disconnected()
}
}
Then use the newtype:
#[derive(Clone)]
struct ActionController {
id: usize,
thing: Arc<ThingAction>,
}
impl ActionController {
    fn new(id: usize, thing: Box<dyn Thing<Action>>) -> Self {
        Self { id, thing: Arc::new(ThingAction(thing)) }
    }
}
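With the newtype in place, the manual impl Drop for ActionController from the question is no longer needed. A rough sketch of the resulting behaviour, reusing the Controlled type from the question:

let controller = ActionController::new(1, Box::new(Controlled {}));
let clone = controller.clone();
controller.invoke(Action::One); // prints "action: One"
drop(controller);               // nothing printed yet, a clone is still alive
clone.invoke(Action::Two);      // prints "action: Two"
drop(clone);                    // last clone gone: prints "disconnected" exactly once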
I don't think there's any perfect way to do this without stdlib support (go check out Arc::drop).
Weak::strong_count or Weak::upgrade is less subject to races: if you downgrade your Arc and then drop it, and the weak reference's strong count is 0 or an attempt to upgrade it fails, you know the Arc is dead. But there is no guarantee that the current thread killed it; two threads might have dropped the Arc concurrently before either had time to check the weak reference's strong count.
I think the only bulletproof way is to get notified by a Drop stored inside the Arc, which is guaranteed to be called only once.
I have an object that I know is inside an Arc, because all of its instances are always Arc-wrapped. I would like to be able to pass a cloned Arc of myself in a function call. The thing I am calling will call me back later on other threads.
In C++, there is a standard mixin called enable_shared_from_this. It enables me to do exactly this:
class Bus : public std::enable_shared_from_this<Bus>
{
....
void SetupDevice(Device device,...)
{
device->Attach(shared_from_this());
}
}
If this object is not under shared_ptr management (the closest C++ has to Arc) then this will fail at run time.
I cannot find an equivalent.
EDIT:
Here is an example of why it's needed. I have a timerqueue library. It allows a client to request that an arbitrary closure be run at some point in the future. The code is run on a dedicated thread. To use it, you must pass a closure containing the code you want executed later.
use std::time::{Duration, Instant};
use timerqueue::*;
use parking_lot::Mutex;
use std::sync::{Arc,Weak};
use std::ops::{DerefMut};
// MeKeeper inlined here because it's not on github
pub struct MeKeeper<T> {
them: Mutex<Weak<T>>,
}
impl<T> MeKeeper<T> {
pub fn new() -> Self {
Self {
them: Mutex::new(Weak::new()),
}
}
pub fn save(&self, arc: &Arc<T>) {
*self.them.lock().deref_mut() = Arc::downgrade(arc);
}
pub fn get(&self) -> Arc<T> {
match self.them.lock().upgrade() {
Some(arc) => return arc,
None => unreachable!(),
}
}
}
// -----------------------------------
struct Test {
data:String,
me: MeKeeper<Self>,
}
impl Test {
pub fn new() -> Arc<Test>{
let arc = Arc::new(Self {
me: MeKeeper::new(),
data: "Yo".to_string()
});
arc.me.save(&arc);
arc
}
fn task(&self) {
println!("{}", self.data);
}
// in real use case the TQ and a ton of other status data is passed in the new call for Test
// to keep things simple here the 'container' passes tq as an arg
pub fn do_stuff(&self, tq: &TimerQueue) {
// stuff includes a async task that must be done in 1 second
//.....
let me = self.me.get().clone();
tq.queue(
Box::new(move || me.task()),
"x".to_string(),
Instant::now() + Duration::from_millis(1000),
);
}
}
fn main() {
// in real case (PDP11 emulator) there is a Bus class owning tons of objects thats
// alive for the whole duration
let tq = Arc::new(TimerQueue::new());
let test = Test::new();
test.do_stuff(&*tq);
// just to keep everything alive while we wait
let mut input = String::new();
std::io::stdin().read_line(&mut input).unwrap();
}
Cargo.toml:
[package]
name = "tqclient"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
timerqueue = { git = "https://github.com/pm100/timerqueue.git" }
parking_lot = "0.11"
There is no way to go from a &self to the Arc that self is stored in. This is because:
1. Rust references have additional assumptions compared to C++ references that would make such a conversion undefined behavior.
2. Rust's implementation of Arc does not even expose the information necessary to determine whether self is stored in an Arc or not.
Luckily, there is an alternative approach. Instead of creating a &self reference to the value inside the Arc and passing that to the method, pass the Arc directly to the method that needs to access it. You can do that like this:
use std::sync::Arc;
struct Shared {
field: String,
}
impl Shared {
fn print_field(self: Arc<Self>) {
let clone: Arc<Shared> = self.clone();
println!("{}", clone.field);
}
}
Then the print_field function can only be called on a Shared encapsulated in an Arc.
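A quick usage sketch of that constraint, assuming the Shared type above (the field value is illustrative); note that the call consumes the Arc it is invoked on, which is why the method clones it first:

use std::sync::Arc;

fn main() {
    let shared = Arc::new(Shared { field: "hello".to_string() });
    shared.print_field(); // fine: called through an Arc (this consumes `shared`)

    // let plain = Shared { field: "nope".to_string() };
    // plain.print_field(); // error: the method's receiver is Arc<Self>
}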
Having found that I needed this three times in recent days, I decided to stop trying to come up with other designs. Maybe it's poor data design as far as Rust is concerned, but I needed it.
It works by changing the new function of the types that use it to return an Arc rather than a raw Self. All my objects are Arc-ed anyway; before, they were Arc-ed by the caller, now it's forced.
Here is the mini util library, called mekeeper:
use parking_lot::Mutex;
use std::sync::{Arc,Weak};
use std::ops::{DerefMut};
pub struct MeKeeper<T> {
them: Mutex<Weak<T>>,
}
impl<T> MeKeeper<T> {
pub fn new() -> Self {
Self {
them: Mutex::new(Weak::new()),
}
}
pub fn save(&self, arc: &Arc<T>) {
*self.them.lock().deref_mut() = Arc::downgrade(arc);
}
pub fn get(&self) -> Arc<T> {
match self.them.lock().upgrade() {
Some(arc) => return arc,
None => unreachable!(),
}
}
}
To use it:
pub struct Test {
me: MeKeeper<Self>,
foo:i8,
}
impl Test {
pub fn new() -> Arc<Self> {
let arc = Arc::new(Test {
me: MeKeeper::new(),
foo:42
});
arc.me.save(&arc);
arc
}
}
Now, when an instance of Test wants to call a function that requires it to pass in an Arc, it does:
fn nargle(&self) {
    let me = self.me.get();
    Ooddle::fertang(me, 42); // fertang needs an Arc<T>
}
The Weak is the same trick shared_from_this uses to keep the object from holding itself alive through its own reference count; I stole that idea.
The unreachable path is safe because the only place that can call MeKeeper::get is the instance of T (Test here) that owns it, and that call can only happen while the T instance is alive. Hence Weak::upgrade never returns None.
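As a side note, on Rust 1.60 and later the standard library's Arc::new_cyclic covers the same pattern without the separate save step. A minimal sketch of the idea (field values are illustrative):

use std::sync::{Arc, Weak};

pub struct Test {
    me: Weak<Test>,
    foo: i8,
}

impl Test {
    pub fn new() -> Arc<Self> {
        // The closure is handed a Weak that points at the Arc being constructed.
        Arc::new_cyclic(|weak| Test { me: weak.clone(), foo: 42 })
    }

    fn arc_of_self(&self) -> Arc<Self> {
        // Same reasoning as MeKeeper::get: self can only be alive while the
        // owning Arc is alive, so the upgrade always succeeds.
        self.me.upgrade().expect("self is always inside a live Arc")
    }
}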
When creating a struct in Rust, it seems difficult to create one without setting all of the fields. For example, with the following code:
struct Connection {
url: String,
stream: TcpStream
}
You aren't able to set url without giving stream as well.
// Compilation error asking for 'stream'
let m = Connection { url: "www.google.com".to_string() };
How are you able to create fields like this that might be None until a later time?
The best I have found is using the Default trait, but I'd rather not have to create the TcpStream when the struct is initialised. Am I able to do this with something like a Box?
One thing you can do is to wrap the TcpStream in an Option, i.e. Option<TcpStream>. When you first construct the struct it'll be None, and when you initialize it you set self.stream = Some(<initialize tcp stream>). Wherever you use the TcpStream, you'll have to check whether it's Some, i.e. whether it has already been initialized. If you can guarantee your program's behavior then you can just unwrap(), but it's probably better to make the check anyway.
struct Connection {
url: String,
stream: Option<TcpStream>
}
impl Connection {
pub fn new() -> Connection {
Connection {
url: "www.google.com".to_string(),
stream: None,
}
}
pub fn initialize_stream(&mut self) {
self.stream = Some(TcpStream::connect("127.0.0.1:34254").unwrap());
}
pub fn method_that_uses_stream(&self) {
if let Some(ref stream) = self.stream {
// can use the stream here
} else {
println!("the stream hasn't been initialized yet");
}
}
}
This is similar to what is done in Swift, in case you're familiar with that language.
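For completeness, a short usage sketch of the above (the connect address in initialize_stream is only an example and error handling is elided):

let mut conn = Connection::new();
conn.method_that_uses_stream(); // prints "the stream hasn't been initialized yet"
conn.initialize_stream();       // connects and stores Some(stream)
conn.method_that_uses_stream(); // the stream is available now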
All fields indeed have to be initialized when creating the struct instance (there is no null in Rust) so all the memory is allocated.
There is often a dedicated method (like new) that sets default values for fields which are supposed to be modified at a later stage.
I'd use Box when you don't know the size of the field at compile time (similar to what Vec does internally).
As an extension to Jorge Israel Peña's answer, you can use a builder. The builder has all the optional fields and produces the final value without Options:
use std::net::TcpStream;
struct ConnectionBuilder {
url: String,
stream: Option<TcpStream>,
}
impl ConnectionBuilder {
fn new(url: impl Into<String>) -> Self {
Self {
url: url.into(),
stream: None,
}
}
fn stream(mut self, stream: TcpStream) -> Self {
self.stream = Some(stream);
self
}
fn build(self) -> Connection {
let url = self.url;
let stream = self
.stream
.expect("Perform actual error handling or default value");
Connection { url, stream }
}
}
struct Connection {
url: String,
stream: TcpStream,
}
impl Connection {
fn method_that_uses_stream(&self) {
// can use self.stream here
}
}
This means that you don't have to litter your code with checks to see if the stream has been set yet.
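A short usage sketch of the builder (the connect address is illustrative):

use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:34254")?;
    let conn = ConnectionBuilder::new("www.google.com")
        .stream(stream)
        .build();
    conn.method_that_uses_stream();
    Ok(())
}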
See also:
How to initialize a struct with a series of arguments
Do Rust builder patterns have to use redundant struct code?
Is it possible to create a macro to implement builder pattern methods?
How to write an idiomatic build pattern with chained method calls in Rust?