Rails 3 uninitialized constant when using thread - multithreading

I am running threads in a Rails 3 rake task to improve its performance. Sometimes I get an uninitialized constant error when accessing a class outside of my current class. For example, when I try to access the NotificationService class from NotificationApp as shown below, I get the error uninitialized constant NotificationService. The error does not occur every time: sometimes the rake task runs fine, and sometimes the same task fails with uninitialized constant. What could be the reason for this, and how can I fix it?
class NotificationApp < ActiveRecord::Base
  def signal_event
    NotificationService.notify
  end
end
Edit
This is how I am creating the thread pool:
class ThreadPool
  def initialize(size)
    @size = size
    @jobs = Queue.new
    @pool = Array.new(@size) do |i|
      Thread.new do
        Thread.current[:id] = i
        catch(:exit) do
          loop do
            job, args = @jobs.pop
            job.call(*args)
          end
        end
      end
    end
  end

  # Add a job to the queue
  def schedule(*args, &block)
    @jobs << [block, args]
  end

  # Run threads and perform jobs from the queue
  def run!
    @size.times do
      schedule { throw :exit }
    end
    @pool.map(&:join)
  end

  # Get all threads created
  def threads
    @pool
  end

  def complete!
    @pool.each { |t| t.kill }
  end
end
And this is how I am using the threads:
def perform
  ReadReplicaHelper.read_from_slave do
    pool = ThreadPool.new(CONNECTION_POOL_COUNT - 1)
    device_ids.in_groups_of(1000, false) do |devices|
      devices.each do |device_id|
        device = Device.find_by_id(device_id)
        if device
          ReadReplicaHelper.read_from_master do
            Notification.where(device_id: device_id).each do |notification|
              pool.schedule do
                pool_method notification, last_status, before_last_status
              end
            end
          end
        end
      end
    end
    # Run the thread pool
    pool.run!
    # Kill the threads created in the pool
    pool.complete!
  end
end

Make sure your NotificationService class is in a file named app/services/notification_service.rb and that you are loading this services directory. Note that Rails 3 constant autoloading is not thread-safe: when several threads trigger autoloading of the same constant at the same time, one of them can fail with uninitialized constant, which explains why the error is intermittent. Eager-loading the class, or requiring it explicitly before spawning the threads, avoids this.
Also, as you are using threads, make sure to join all of them after starting them.
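For reference, the queue-plus-sentinel pool pattern the question implements is language-neutral; here is a minimal Python analogue (illustrative only, not Rails code), with one sentinel job per worker playing the role of throw :exit:

```python
import queue
import threading

class ThreadPool:
    """Minimal queue-based pool mirroring the Ruby class in the question."""
    def __init__(self, size):
        self.size = size
        self.jobs = queue.Queue()
        self.pool = [threading.Thread(target=self._work) for _ in range(size)]
        for t in self.pool:
            t.start()

    def _work(self):
        while True:
            job, args = self.jobs.get()
            if job is None:  # sentinel: exit the worker, like throw :exit
                break
            job(*args)

    def schedule(self, job, *args):
        self.jobs.put((job, args))

    def run(self):
        # Enqueue one sentinel per worker, then join every worker.
        for _ in range(self.size):
            self.jobs.put((None, ()))
        for t in self.pool:
            t.join()

results = []
lock = threading.Lock()

def square(n):
    with lock:  # guard the shared results list
        results.append(n * n)

pool = ThreadPool(4)
for i in range(10):
    pool.schedule(square, i)
pool.run()
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the sentinels are enqueued after the real jobs, every scheduled job is guaranteed to finish before the workers exit, which is why the join-based run is safer than killing the threads.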

Related

How can await/async use 2 threads?

As far as I know, programs using async/await use only one thread. That means we do not have to worry about conflicting threads. When the compiler sees "await", it simply does other things on that same thread until whatever is being awaited is done.
I mean, the thing we are awaiting may run on another thread. However, the program does not create another thread; it simply does something else on the same thread.
Hence, we shouldn't have to worry about conflicts.
Yet today I discovered that something is running on at least two different threads.
Public Sub LogEvents(ByVal whatToLog As String, Optional ByVal canRandom As Boolean = True)
    Static logNumber As Integer
    Dim timeStamp As String
    timeStamp = CStr(Microsoft.VisualBasic.Now)
    whatToLog = timeStamp & " " & " " & whatToLog & Microsoft.VisualBasic.vbNewLine
    Try
        Debug.Print(whatToLog)
        System.IO.File.AppendAllText("log.txt", whatToLog, defaultEncoding)
        ...
Looking at the debugger's thread window, one is a worker thread and the other is the main thread, and both are stuck at the same place.
What confuses me is that I thought everything should have been running on the main thread; that's just how async/await works. How can anything run on a worker thread?
The task is created like this
For Each account In uniqueAccounts().Values
Dim newtask = account.getMarketInfoAsync().ContinueWith(Sub() account.LogFinishTask("Geting Market Info", starttime))
LogEvents("marketinfo account " + account.Exchange + " is being done by task " + newtask.Id.ToString + " " + newtask.ToString)
tasklist.Add(newtask)
'newtask.ContinueWith(Sub() LogEvents(account.ToString))
Next
That is followed by
LogEvents("Really Start Getting Market Detail of All")
Try
    Await jsonHelper.whenAllWithTimeout(tasklist.ToArray, 500000)
Catch ex As Exception
    Dim b = 1
End Try
That calls
Public Shared Async Function whenAllWithTimeout(taskar As Task(), timeout As Integer) As Task
    Dim timeoutTask = Task.Delay(timeout)
    Dim maintask = Task.WhenAll(taskar)
    Await Task.WhenAny({timeoutTask, maintask})
    If maintask.IsCompleted Then
        Dim b = 1
        For Each tsk In taskar
            LogEvents("Not Time Out. Status of task " + tsk.Id.ToString + " is " + tsk.IsCompleted.ToString)
        Next
    End If
    If timeoutTask.IsCompleted Then
        Dim b = 1
        For Each tsk In taskar
            LogEvents("status of task " + tsk.Id.ToString + " is " + tsk.IsCompleted.ToString)
        Next
    End If
End Function
So I created a bunch of tasks and I use Task.WhenAll and Task.WhenAny.
Is that why they run on a different thread than the main thread?
How do I make it run on the main thread only?
As far as I know, programs using await async uses only 1 thread.
This is incorrect.
When the compiler see "await" it will simply do other things on that same thread till what they're awaiting for is done.
Also incorrect.
I recommend reading my async intro.
await actually causes a return from the method. The thread may or may not be returned to the runtime.
How can anything run on worker thread?
When async methods resume executing after an await, by default they will resume executing on a context captured by that await. If there was no context (common in console applications), then they resume on a thread pool thread.
How do I make it run on main thread only?
Give them a single-threaded context. GUI main threads use a single-threaded context, so you could run this on a GUI main thread. Or if you are writing a console application, you can use AsyncContext from my AsyncEx library.
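The "single-threaded context" point is easiest to see in a runtime that provides one by default. As a rough analogy in Python rather than VB.NET (illustrative only, not the .NET model itself), asyncio's event loop is such a context: every continuation after an await resumes on the loop's own thread, so concurrency happens without a second thread:

```python
import asyncio
import threading

async def worker(n):
    before = threading.current_thread().name
    await asyncio.sleep(0.01)  # suspension point, analogous to Await
    after = threading.current_thread().name
    return before, after

async def main():
    # Three coroutines interleave, but all on the event loop's one thread:
    # the loop is the "captured context" every await resumes on.
    return await asyncio.gather(*(worker(n) for n in range(3)))

results = asyncio.run(main())
print(all(before == after for before, after in results))  # True
```

With no such context (as in a .NET console app, per the answer above), continuations would instead land on pool threads.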

Multiprocess : Persistent Pool?

I have code like the one below:
def expensive(self, c, v):
    .....

def inner_loop(self, c, collector):
    self.db.query('SELECT ...', (c,))
    for v in self.db.cursor.fetchall():
        collector.append(self.expensive(c, v))

def method(self):
    # create a Pool
    # join the Pool ??
    self.db.query('SELECT ...')
    for c in self.db.cursor.fetchall():
        collector = []
        # RUN the whole cycle in parallel in separate processes
        self.inner_loop(c, collector)
        # do stuff with the collector
    # ! close the pool ?
Both the outer and the inner loop run for thousands of steps.
I think I understand how to run a Pool of a couple of processes; all the examples I found show more or less that.
But in my case I need to launch a persistent Pool and then feed it the data (the c values). Once an inner-loop process has finished, I have to supply the next available c value, keep the processes running, and collect the results.
How do I do that?
A clunky idea I have is:
def method(self):
    ws = 4
    with Pool(processes=ws) as pool:
        cs = []
        for i, c in enumerate(..):
            cs.append(c)
            if i % ws == 0:
                res = [pool.apply(self.inner_loop, (c,)) for i in range(ws)]
                cs = []
                collector.append(res)
Will this keep the same pool running, i.e. not launch a new process every time?
Do I need the 'if i % ws == 0' part, or can I use imap() or map_async(), and the Pool object will block the loop when the available workers are exhausted and continue when some are freed?
Yes, the way that multiprocessing.Pool works is:
Worker processes within a Pool typically live for the complete duration of the Pool’s work queue.
So simply submitting all your work to the pool via imap should be sufficient:
with Pool(processes=4) as pool:
    initial_results = db.fetchall("SELECT c FROM outer")
    results = list(pool.imap(self.inner_loop, initial_results))
That said, if you really are doing this to fetch things from the DB, it may make more sense to move more processing down into that layer (bring the computation to the data rather than bringing the data to the computation).

Torch - Multithreading to load tensors into a queue for training purposes

I would like to use the threads library (or perhaps parallel) for loading/preprocessing data into a queue, but I am not entirely sure how it works. In summary:
Load data (tensors) and pre-process them (this takes time, hence why I am here), putting them in a queue. I would like to have as many threads as possible doing this so that the model is not waiting, or not waiting for long.
For the tensor at the top of the queue, extract it, forward it through the model, and remove it from the queue.
I don't really understand the example in https://github.com/torch/threads well enough. A hint or example of where I would load data into the queue and train would be great.
EDIT 14/03/2016
In this example "https://github.com/torch/threads/blob/master/test/test-low-level.lua" using a low level thread, does anyone know how I can extract data from these threads into the main thread?
Look at this multi-threaded data provider:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua
It runs this file in the thread:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L18
by calling it here:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L30-L43
And afterwards, if you want to queue a job into the thread, you provide two functions:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L84
The first one runs inside the thread, and the second one runs in the main thread after the first one completes.
Hopefully that makes it a bit more clear.
If Soumith's examples in the previous answer are not easy to use, I suggest you build your own pipeline from scratch. Here is an example of two synchronized threads: one for writing data and one for reading data.
local t = require 'threads'
t.Threads.serialization('threads.sharedserialize')
local tds = require 'tds'
local dict = tds.Hash() -- only local variables work here, and only tables or tds.Hash()
dict[1] = torch.zeros(4)
local m1 = t.Mutex()
local m2 = t.Mutex()
local m1id = m1:id()
local m2id = m2:id()
m1:lock()
local pool = t.Threads(
  1,
  function(threadIdx)
  end
)
pool:addjob(
  function()
    local t = require 'threads'
    local m1 = t.Mutex(m1id)
    local m2 = t.Mutex(m2id)
    while true do
      m2:lock()
      dict[1] = torch.randn(4)
      m1:unlock()
      print('W ===> ')
      print(dict[1])
      collectgarbage()
      collectgarbage()
    end
    return __threadid
  end,
  function(id)
  end
)
-- Code executing on master:
local a = 1
while true do
  m1:lock()
  a = dict[1]
  m2:unlock()
  print('R --> ')
  print(a)
end
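The two-mutex handshake above is not Torch-specific. Here is a rough Python equivalent of the same pattern (illustrative only, with threading.Lock standing in for t.Mutex and a finite loop so it terminates): the writer releases "data ready" after producing, and the reader releases "slot free" after consuming.

```python
import threading

data = {"item": None}
can_read = threading.Lock()   # released by the writer when data is ready
can_write = threading.Lock()  # released by the reader when data is consumed

can_read.acquire()  # start in the "nothing to read yet" state, like m1:lock()

def writer():
    for i in range(5):
        can_write.acquire()   # wait until the reader has consumed the slot
        data["item"] = i
        can_read.release()    # signal: data ready

def reader(out):
    for _ in range(5):
        can_read.acquire()    # wait until the writer has produced
        out.append(data["item"])
        can_write.release()   # signal: slot free again

out = []
w = threading.Thread(target=writer)
r = threading.Thread(target=reader, args=(out,))
w.start(); r.start()
w.join(); r.join()
print(out)  # [0, 1, 2, 3, 4]
```

Each lock is released by the opposite thread, which is what makes the two threads alternate in lock-step, exactly like the m1/m2 pair in the Lua code.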

Python - queuing one function

I've just started learning Python, but I have a problem with my code:
import pifacecad
import time

# define the handler before registering it
def blowMyMind(event):
    print('some prints...')
    time.sleep(4)
    print('and the end.')

# listener initialization
cad = pifacecad.PiFaceCAD()
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()
blowMyMind() will be fired as many times as the listener tells it to. That is okay.
My goal is to deactivate the listener UNTIL blowMyMind ends. Pifacecad suggests Barrier() to achieve that, or at least I think that's what it is there for (correct me if I'm wrong).
Right now it fires for every listener event, but instead of running the function 99 times at once, it queues the calls and runs them one by one.
With Barriers I think it should look like this:
from threading import Barrier

# Barrier
end_barrier = Barrier(1)

def blowMyMind(event):
    global end_barrier
    test = end_barrier.wait()
    print(test)  # returns 0, which it should not for about 5 seconds
    print('some prints...')
    time.sleep(4)
    print('and the end.')

# listener initialization
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()
The funny part is that when I change the number of parties in the Barrier initialization, it causes a BrokenBarrierError at the first listener event.
Actually, I think I have completely misunderstood Barrier(). I suspect the problem is that all listener events run in one thread instead of in threads of their own.
It makes me even more confused when I read:
parties: The number of threads required to pass the barrier.
from here: https://docs.python.org/3/library/threading.html
My conclusion: when initializing Barrier(X), it would be released when there are X ('or less', 'or more'?) threads. That sounds VERY stupid :D
I tried to make it work that way, with no luck:
busy = 0

def blowMyMind(event):
    global busy
    if busy == 0:
        busy = 1
        print('some prints...')
        time.sleep(4)
        print('and the end.')
        busy = 0
    else:
        return None

# listener initialization
cad = pifacecad.PiFaceCAD()
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()
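The busy-flag idea in the last snippet is close, but the check-then-set on a plain integer is racy when events arrive from multiple threads. A race-free version of the same idea is a non-blocking lock acquire: events that arrive while the handler is running are skipped instead of queued. A generic sketch, independent of pifacecad (the simulated events and timings are made up for illustration):

```python
import threading
import time

busy = threading.Lock()
handled, skipped = [], []

def blow_my_mind(event):
    # Non-blocking acquire: if a previous event is still being
    # handled, drop this one instead of queuing it.
    if not busy.acquire(blocking=False):
        skipped.append(event)
        return
    try:
        handled.append(event)
        time.sleep(0.5)  # stands in for the 4-second handler body
    finally:
        busy.release()

# Simulate five near-simultaneous listener events.
threads = [threading.Thread(target=blow_my_mind, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(handled), len(skipped))  # events arriving while busy are skipped
```

If you would rather process every event but one at a time, a blocking acquire (the default) gives you the queue-and-run-one-by-one behavior the question describes.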

Ruby XMPP4R bot and Threads - trouble

I want my bot to send and receive messages in parallel threads. I also want the bot to send a message back to the user whenever it receives one. But right now it sends the reply every 5 seconds. I understand that this is because I used "loop do", but without an infinite loop I can't use callbacks.
So how do I send and receive messages in parallel threads? And how do I overcome my loop problem when receiving messages?
require 'xmpp4r'

class Bot
  include Jabber

  def initialize(jid, jpassword)
    @jid = jid
    @jpassword = jpassword
    @client = Client.new(JID::new(@jid))
    @client.connect
    @client.auth(@jpassword)
    @client.send(Presence.new.set_type(:available))
  end

  def wait4msg
    loop do
      @client.add_message_callback do |msg|
        send_message(msg.from, msg.body)
        sleep 5
      end
    end
  end

  def send_message(to, message)
    msg = Message::new(to, message)
    msg.type = :chat
    @client.send(msg)
  end

  def add_user(jid)
    adding = Presence.new.set_type(:subscribe).set_to(jid)
    @client.send(adding)
  end
end
bot = Bot.new('from@example.xmpp', '123456')

t1 = Thread.new do
  bot.wait4msg
end

t2 = Thread.new do
  bot.send_message('to@example.xmpp', Random.new.rand(100).to_s)
end

Thread.list.each { |t| t.join if t != Thread.main }
Good day. You can use callbacks without a loop; see the examples. For instance, add this in initialize:
@client.add_message_callback do |m|
  if m.type != :error
    m2 = Message.new(m.from, "You sent: #{m.body}")
    m2.type = m.type
    @client.send(m2)
  end
end
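The point of the answer is that the callback is registered once and the library's own receive thread invokes it for each incoming message, so no polling loop is needed. The same shape in a toy Python sketch (MiniClient is invented for illustration and is not part of XMPP4R):

```python
import queue
import threading

class MiniClient:
    """Toy stand-in for an XMPP client: register a callback once and a
    background thread invokes it for every incoming message."""
    def __init__(self):
        self._inbox = queue.Queue()
        self._callback = None
        self._thread = threading.Thread(target=self._dispatch, daemon=True)
        self._thread.start()

    def add_message_callback(self, callback):
        # Registered once -- no loop wrapped around the registration.
        self._callback = callback

    def deliver(self, msg):
        # Simulates a message arriving from the network.
        self._inbox.put(msg)

    def _dispatch(self):
        while True:
            msg = self._inbox.get()
            if self._callback:
                self._callback(msg)
            self._inbox.task_done()

replies = []
client = MiniClient()
client.add_message_callback(lambda m: replies.append("You sent: " + m))
for m in ["hi", "there"]:
    client.deliver(m)
client._inbox.join()  # wait until the dispatcher has handled both
print(replies)  # ['You sent: hi', 'You sent: there']
```

Wrapping add_message_callback in a loop, as in the question, registers a new callback on every iteration; registering once and letting the receive thread do the looping is the intended usage.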
