How to implement a periodic task in Hammerspoon initialization? - multithreading

I thought it would be great to be notified when my network connectivity dies or is revived, so I put this in my Hammerspoon init.lua:
ping = nil
previousStatus = nil

function pingCallback(server, eventType, ...)
    hs.alert.show(eventType)
    if eventType == "receivedPacket" then
        newStatus = "success"
    elseif eventType == "didFail" or eventType == "sendPacketFailed" then
        newStatus = "failure"
    end
    if newStatus ~= previousStatus then
        hs.alert.show(string.format("Network status changed to %s", newStatus))
        previousStatus = newStatus
    end
end

while true do
    ping = hs.network.ping.ping("google.com", 5, 1.0, 2.0, "any", pingCallback)
    os.execute("sleep 15")
end
The problem is the sleep: it sleeps Hammerspoon itself, making it hang. What I really need is a thread or a timer, or maybe to start a different OS process. What should I do?

Hammerspoon has a timer. You might replace your while loop with something like:

function pingGoogle()
    hs.network.ping.ping("google.com", 5, 1.0, 2.0, "any", pingCallback)
end

-- Keep a reference in a global; a local timer would be garbage-collected
-- and silently stop firing.
googlePinger = hs.timer.new(15, pingGoogle)
googlePinger:start()
Some other things to consider:
Start/stop the ping timer based on network reachability. There's no need to poll with a ping if the route to the ping target doesn't exist.
HTTP GET to endpoints used by big vendors for their own network connectivity checks (a sketch follows the list):
Google: http://clients3.google.com/generate_204 - should return a 204 response code (the checkers that use this also verify a response content length of 0).
Microsoft: http://www.msftncsi.com/ncsi.txt - should return a 200 response, and the response body should read "Microsoft NCSI".
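For the Google check, a minimal sketch using hs.http.asyncGet (its callback receives the status code, body, and headers; previousStatus is the global from the question's code):

-- Poll the Google connectivity endpoint instead of pinging. A 204 means
-- the network path is up; anything else (Hammerspoon reports a negative
-- status on connection failure) is treated as down.
function checkConnectivity()
    hs.http.asyncGet("http://clients3.google.com/generate_204", nil,
        function(status, body, headers)
            local newStatus = (status == 204) and "success" or "failure"
            if newStatus ~= previousStatus then
                hs.alert.show(string.format("Network status changed to %s", newStatus))
                previousStatus = newStatus
            end
        end)
end

connectivityChecker = hs.timer.new(15, checkConnectivity)
connectivityChecker:start()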

Related

Pause/Delay sending of new batch of users from swarm

I have a test case where I need to spawn 1000 websocket connections and sustain a conversation over them through a Locust task (it has a predefined send/receive flow for the websocket connections). I can successfully do it with the following setup in Locust:
Max Number of Users: 1000
Hatch rate: 1000
However, this setup opens 1000 connections every second. Even if I lower the hatch rate, at some point it will still keep spawning 1000 new websocket connections per second. Is there a way to spawn 1000 users instantly and then halt/delay the swarm so it doesn't send a new batch of 1000 connections for some time?
I am trying to test whether my server can handle 1000 users sending and receiving messages over websocket connections. I have tried a multiprocessing approach in Python, but I'm having a hard time spawning connections as fast as I can with Locust.
import json
import random
import time

from locust import HttpUser, TaskSet, constant, events, task
from websocket import create_connection

class UserBehavior(TaskSet):
    statements = [
        "Do you like coffee?",
        "What's your favorite book?",
        "Do you invest in crypto?",
        "Who will host the Superbowl next year?",
        "Have you listened to the new Adele?",
        "Coldplay released a new album",
        "I watched the premiere of Succession season 3 last night",
        "Who is your favorite team in the NBA?",
        "I want to buy the new Travis Scott x Jordan shoes",
        "I want a Lamborghini Urus",
        "Have you been to the Philippines?",
        "Did you sign up for a Netflix account?"
    ]

    def on_start(self):
        pass

    def on_quit(self):
        pass

    @task
    def send_convo(self):
        end = False
        ws_url = "ws://xx.xx.xx.xx:8080/websocket"
        self.ws = create_connection(ws_url)
        body = json.dumps({"text": "start blender"})
        self.ws.send(body)
        while True:
            # print("Waiting for response..")
            response = self.ws.recv()
            if response is not None:
                if "Sorry, this world closed" in response:
                    end = True
                break
        if not end:
            body = json.dumps({"text": "begin"})
            self.ws.send(body)
            while True:
                # print("Waiting for response..")
                response = self.ws.recv()
                if response is not None:
                    # print("[BOT]: ", response)
                    if "Sorry, this world closed" in response:
                        end = True
                        self.ws.close()
                    break
        if not end:
            body = json.dumps({"text": random.choice(self.statements)})
            start_at = time.time()
            self.ws.send(body)
            while True:
                response = self.ws.recv()
                if response is not None:
                    if "Sorry, this world closed" not in response:
                        response_time = int((time.time() - start_at) * 1000)
                        print(f"[BOT]Response: {response}")
                        response_length = len(response)
                        events.request_success.fire(
                            request_type='Websocket Recv',
                            name='test/ws/echo',
                            response_time=response_time,
                            response_length=response_length,
                        )
                    else:
                        end = True
                        self.ws.close()
                    break
        if not end:
            body = json.dumps({"text": "[DONE]"})
            self.ws.send(body)
            while True:
                response = self.ws.recv()
                if response is not None:
                    if "Sorry, this world closed" in response:
                        end = True
                        self.ws.close()
                    break
        if not end:
            time.sleep(1)
            body = json.dumps({"text": "EXIT"})
            self.ws.send(body)
            time.sleep(1)
            self.ws.close()

class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    wait_time = constant(2)
    host = "ws://xx.xx.xx.xx:8080/websocket"
For this particular test, I set the maximum users to 1 and the hatch rate to 1, and clearly Locust keeps sending 1 request per second, as seen in the following responses:
[BOT]Response: {"text": "No, I don't have a netflix account. I do have a Hulu account, though.", "quick_replies": null}
[BOT]Response: {"text": "I have not, but I would love to go. I have always wanted to visit the Philippines.", "quick_replies": null}
[BOT]Response: {"text": "No, I don't have a netflix account. I do have a Hulu account, though.", "quick_replies": null}
[BOT]Response: {"text": "I think it's going to be New Orleans. _POTENTIALLY_UNSAFE__", "quick_replies": null}
My expectation is that after I set the maximum users to 1 and a hatch rate of 1, there would instantly be 1 websocket connection sending a random message and receiving 1 main response from the websocket server. But what's happening is it keeps repeating the task every second until I explicitly hit the stop button on the Locust dashboard.
I would debug your logic. Put more print statements in each if block at various places and between each block. When dealing with a long chain of decisions, it's easy for things to get tripped up.
In this case, you only want to sleep in a very specific situation, but it's not happening. Most likely you're setting end = True when you're not expecting it, so you're not sleeping and immediately get a new user.
EDIT:
Reviewing your question and issue description again, it sounds like you expect Locust to send a single request and then never send another one. That's not how Locust works. Locust will run your task code for a user; when it's done, that user goes away, Locust waits for a certain amount of time (it looks like you have it set to 2 seconds), and then it spawns another user and starts the task over again. The idea is that it tries to keep a near-constant number of users running, however many you tell it to. By default, it will not just run 1000 users once and then end the test.
If you want to keep all 1000 users running, you need to make them continue to execute code. For example, you could put everything in your task in another while loop with another way to break out and end (see the sketch below). That way, even after making your socket connection and sending the single message you expect, the user stays alive in the loop and won't end because it ran out of things to do. Doing it this way requires a lot more work and coordination, but it is possible. There may be other questions on SO about different approaches if this isn't exactly what you're looking for.
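A minimal sketch of that outer-loop idea (run_one_conversation and max_conversations are hypothetical names, just one way to structure the loop and break out):

from locust import TaskSet, task

class UserBehavior(TaskSet):
    @task
    def send_convo(self):
        conversations = 0
        max_conversations = 100  # hypothetical exit condition
        # The outer loop keeps this user alive instead of letting the task
        # return, so Locust doesn't retire it and spawn a replacement.
        while conversations < max_conversations:
            self.run_one_conversation()  # hypothetical: the send/recv logic above
            conversations += 1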

Openresty concurrent requests

I would like to use OpenResty with the Lua interpreter.
I can't make OpenResty handle two concurrent requests to two separate endpoints. I simulate one request doing some hard calculations by running a long loop:
local function busyWaiting()
    local self = coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        coroutine.yield(self)
    end
end

local self = coroutine.running()
local thread = ngx.thread.spawn(busyWaiting)

while coroutine.status(thread) ~= 'zombie' do
    coroutine.yield(self)
end

ngx.say('test1!')
The other endpoint just sends response immediately.
ngx.say('test2')
I send a request to the first endpoint and then a second request to the second endpoint. However, OpenResty is blocked by the first request, so I receive both responses almost at the same time.
Setting the nginx parameter worker_processes 1; to a higher number does not help either, and I would like to have only a single worker process anyway.
What is the proper way to let OpenResty handle additional requests and not get blocked by the first request?
-- Use the ngx.coroutine API and explicitly resume the spawned thread
-- until it finishes ('dead'), instead of polling for 'zombie'.
local function busyWaiting()
    local self = ngx.coroutine.running()
    local i = 1
    while i < 9999999 do
        i = i + 1
        ngx.coroutine.yield(self)
    end
end

local thread = ngx.thread.spawn(busyWaiting)

while ngx.coroutine.status(thread) ~= 'dead' do
    ngx.coroutine.resume(thread)
end

ngx.say('test1!')
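A related trick, not part of the answer above but commonly used in OpenResty: if the heavy work can be chunked, yielding to the nginx event loop with ngx.sleep(0) every so often lets the same worker serve other requests in between chunks:

local function busyWaiting()
    local i = 1
    while i < 9999999 do
        i = i + 1
        if i % 100000 == 0 then
            -- A zero-second sleep yields to the event loop so the single
            -- worker can service other pending requests.
            ngx.sleep(0)
        end
    end
end

busyWaiting()
ngx.say('test1!')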

OSX - main thread is waiting; how to get the interrupt pipe response

I am developing a device driver on Mac. My question is: how can we make an asynchronous device request synchronous? I send a SendEncapsulatedCommand to the device and fetch its response with GetEncapsulatedResponse after getting a notification on the interrupt pipe. How can I make my thread wait until both requests (the send and the get) have completed? The function that calls GetEncapsulatedResponse is a virtual function called by the upper layer, so if I wait inside that virtual function, I cannot receive the response while my thread is waiting.
Please help me resolve this problem.
Thanks in advance.
bool MyDriverClass::USBSetPacketFilter()
{
    IOReturn value;
    // ...
    value = sendEncapsulatedCommand(/* pointer to structure */, length);
    // Sleep until the interrupt-pipe callback signals completion.
    IOLockSleepDeadline(x, y, z, w);
    // XVZ is a global flag that is set when getEncapsulated completes.
    if (value == kIOReturnSuccess && XVZ == true)
        return true;
    else
        return false;
}
In another function I read the interrupt pipe:
pipe->Read(mMemDes, &m_CommInfo, NULL);
In the m_CommInfo callback function we check whether it is a device response; if so, we call the getEncapsulated function to complete the request, and IOLockWakeup(x, y, z) to wake the thread.
But when the upper layer calls USBSetPacketFilter(), my code gets stuck in IOLockSleepDeadline until the timeout expires, so the thread never gets to read the interrupt pipe.
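For reference, a minimal sketch of the usual IOLock wait/wake pattern (all names here are hypothetical; the key points are that the waiter holds the lock and re-checks a flag, and that the interrupt-pipe read must be serviced on a different thread than the one that sleeps):

#include <IOKit/IOLocks.h>

static IOLock *gLock;          // allocated once with IOLockAlloc()
static bool    gGotResponse;   // set by the interrupt-pipe callback

// Called from the upper layer; blocks until the response arrives or
// the deadline passes.
bool waitForEncapsulatedResponse(AbsoluteTime deadline)
{
    IOLockLock(gLock);
    while (!gGotResponse) {
        if (IOLockSleepDeadline(gLock, &gGotResponse, deadline,
                                THREAD_UNINTERRUPTIBLE) != THREAD_AWAKENED) {
            break;  // timed out
        }
    }
    bool ok = gGotResponse;
    gGotResponse = false;      // reset for the next request
    IOLockUnlock(gLock);
    return ok;
}

// Called from the interrupt-pipe completion callback, on its own thread.
void encapsulatedResponseArrived(void)
{
    IOLockLock(gLock);
    gGotResponse = true;
    IOLockWakeup(gLock, &gGotResponse, true /* wake one thread */);
    IOLockUnlock(gLock);
}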

Using Sockets and Threads, Why Is My Chat Server So Slow and CPU Usage So High?

So my code works fine, except that when iterating through arrays and sending a response to multiple chat clients, the latency between each client's reception of the response is nearly a second. I'm running the server and client on my own computer, so there shouldn't be any real latency, right? I know Ruby isn't this slow. Also, why does my computer's fan spin up when running this? There's a bit more code if it would be helpful to include it.
# Creates a thread per client that listens for any messages and relays them
# to the server viewer and all the other clients.
create_client_listener_threads = Thread.new do
  x = nil
  client_quantity = 0
  # Loops indefinitely.
  until x != nil
    # Checks to see if clients have joined since last check.
    if @client_join_order_array.size > client_quantity
      # Derives number of new arrivals.
      number_of_new_arrivals = @client_join_order_array.size - client_quantity
      # Updates number of clients in client_quantity.
      client_quantity = @client_join_order_array.size
      if number_of_new_arrivals != 0
        # Passes new arrivals into client for their thread creation.
        @client_join_order_array[-1 * number_of_new_arrivals..-1].each do |client|
          # Creates thread to handle receiving of each client's text.
          client_thread = Thread.new do
            loop do
              text = client.acception.gets
              # Displays text for server viewer.
              puts "#{client.handle} @ #{Time.now} said: #{text}"
              @client_hash.each_value do |value|
                if value.handle != client.handle
                  # Displays text for everyone except server viewer and person who spoke.
                  value.acception.puts "#{client.handle} @ #{Time.now} said: #{text}"
                end
              end
            end
          end
        end
      end
    end
  end
end
Instead of testing whether @client_join_order_array.size > client_quantity, and doing nothing except smoke the CPU if it is false, you should be accepting the new connection at this point, blocking until there is one. In other words, move the code that accepts connections and adds them to the array here (see the sketch below).
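A minimal sketch of that shape (names are hypothetical; it assumes a TCPServer in @server and a broadcast helper standing in for the relay logic):

require 'socket'

# Block in accept instead of spinning: each iteration waits for a client,
# then spawns the per-client reader thread right where the connection arrives.
loop do
  socket = @server.accept              # blocks until a client connects
  Thread.new(socket) do |client|
    while (text = client.gets)         # blocks until the client sends a line
      broadcast(text, client)          # hypothetical relay to the other clients
    end
  end
end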

Ruby XMPP4R bot and Threads - trouble

I want my bot to send and receive messages in parallel threads. I also want my bot to send a message back to the user whenever it receives any message from the user. But right now it sends the reply back every 5 seconds. I understand that it's because I used loop do, but without an infinite loop I can't use callbacks.
So how do I send and receive messages in parallel threads? And how do I overcome my "loop problem" when receiving messages?
require 'xmpp4r'

class Bot
  include Jabber

  def initialize(jid, jpassword)
    @jid = jid
    @jpassword = jpassword
    @client = Client.new(JID::new(@jid))
    @client.connect
    @client.auth(@jpassword)
    @client.send(Presence.new.set_type(:available))
  end

  def wait4msg
    loop do
      @client.add_message_callback do |msg|
        send_message(msg.from, msg.body)
        sleep 5
      end
    end
  end

  def send_message(to, message)
    msg = Message::new(to, message)
    msg.type = :chat
    @client.send(msg)
  end

  def add_user(jid)
    adding = Presence.new.set_type(:subscribe).set_to(jid)
    @client.send(adding)
  end
end

bot = Bot.new('from@example.xmpp', '123456')

t1 = Thread.new do
  bot.wait4msg
end

t2 = Thread.new do
  bot.send_message('to@example.xmpp', Random.new.rand(100).to_s)
end

Thread.list.each { |t| t.join if t != Thread.main }
Good day. You can use callbacks without a loop; see the examples. For example, in initialize add:
@client.add_message_callback do |m|
  if m.type != :error
    m2 = Message.new(m.from, "You sent: #{m.body}")
    m2.type = m.type
    @client.send(m2)
  end
end
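With the callback registered in initialize, xmpp4r delivers incoming messages on its own receive thread, so the main thread only has to stay alive. A minimal sketch, assuming the Bot class above:

bot = Bot.new('from@example.xmpp', '123456')
# No polling loop needed: the message callback fires on xmpp4r's thread.
Thread.stop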
