My question is simple:
Right now this code sends an empty request to the subject chan.01.msg and either gets the message currently being broadcast or prints nats: timeout. The request message itself also shows up on the subject (something like: Received a message on chan.01.msg _INBOX.<hash_my>.<salt_up>: b'') and is not desirable there. I do filter it out in the callback, but that really feels like the wrong way to do it.
Can I just pull messages with the desired subject?
async def msgcb(msg):
    """
    Message callback function
    """
    subject = msg.subject
    reply = msg.reply
    data = msg.data
    if len(data) > 0:
        print(f"Received a message on {subject} {reply}: {data}")

logging.debug("Prepare to subscribe")
sub = await nc.subscribe(subject="chan.01.msg", cb=msgcb)
logging.debug("loop process messages on subject")
while True:
    await asyncio.sleep(1)
    try:
        resp = await nc.request('chan.01.msg')
        print(resp)
    except Exception as e:
        print(e)
You are subscribing to the same subject you are publishing to, so naturally you get the message back when sending a request. To avoid receiving messages that the same client produces, you can use the no_echo option on connect.
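A minimal sketch of that, assuming nats-py 2.x and a NATS server on localhost (the URL and the 1-second timeout are placeholders). With no_echo=True the connection no longer receives messages it published itself, so the callback only fires for messages from other clients while request replies still arrive on the inbox:

import asyncio
import nats


async def main():
    # no_echo asks the server not to deliver this connection's own
    # published messages back to its subscriptions
    nc = await nats.connect("nats://127.0.0.1:4222", no_echo=True)

    async def msgcb(msg):
        # Only messages published by *other* connections arrive here now
        print(f"Received a message on {msg.subject} {msg.reply}: {msg.data}")

    await nc.subscribe("chan.01.msg", cb=msgcb)

    while True:
        await asyncio.sleep(1)
        try:
            resp = await nc.request("chan.01.msg", b"", timeout=1)
            print(resp)
        except Exception as e:
            print(e)


asyncio.run(main())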
I have an AMQP publisher class with the following methods. on_response is the callback that is called when a consumer sends a message back to the RPC queue I set up, i.e. the self.callback_queue.name you see in the reply_to of the Message. publish publishes out to a direct exchange with a routing key that has multiple consumers (very similar to a fanout), and multiple responses come back. I create a number of futures equal to the number of responses I expect, and asyncio.wait for those futures to complete. As I get responses back on the queue and consume them, I set the result on the futures.
async def on_response(self, message: IncomingMessage):
    if message.correlation_id is None:
        logger.error(f"Bad message {message!r}")
        await message.ack()
        return
    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()
    if hasattr(body, 'error'):
        future.set_exception(body)
    else:
        future.set_result(body)
    await message.ack()
async def publish(self, routing_key, expected_response_count, msg, timeout=None, return_partial=False):
    if not self.connected:
        logger.info("Publisher not connected. Waiting to connect first.")
        await self.connect()
    correlation_id = str(uuid.uuid4())
    futures = [self.loop.create_future() for _ in range(expected_response_count)]
    self.futures[correlation_id] = futures
    await self.exchange.publish(
        Message(
            str(msg).encode(),
            content_type="text/plain",
            correlation_id=correlation_id,
            reply_to=self.callback_queue.name,
        ),
        routing_key=routing_key,
    )
    done, pending = await asyncio.wait(futures, timeout=timeout, return_when=asyncio.FIRST_EXCEPTION)
    if not return_partial and pending:
        raise asyncio.TimeoutError(f'Failed to return all results for publish to {routing_key}')
    for f in pending:
        f.cancel()
    del self.futures[correlation_id]
    results = []
    for future in done:
        try:
            results.append(json.loads(future.result()))
        except json.decoder.JSONDecodeError as e:
            logger.error(f'Client did not return JSON!! {e!r}')
            logger.info(future.result())
    return results
My goal is to either wait until all futures are finished or a timeout occurs. This is all working nicely at the moment. What doesn't work is that when I added return_when=asyncio.FIRST_EXCEPTION, asyncio.wait does not finish after the first call of future.set_exception(...) as I thought it would.
What do I need to do with the future so that when I get a response back and see that an error occurred on the consumer side (before the timeout, or even before other responses), the await asyncio.wait is no longer blocking? I was looking at the documentation and it says:
The function will return when any future finishes by raising an exception
when return_when=asyncio.FIRST_EXCEPTION. My first thought was that I'm not raising an exception in my future correctly, only I'm having trouble finding out exactly how I should do that. From the API documentation for the Future class, it looks like I'm doing the right thing.
When I created a minimum viable example, I realized I was actually doing things MOSTLY right after all, and had glossed over other errors causing this not to work. Here is my minimal example:
The most important change I had to make was to actually pass an Exception object (a subclass of BaseException) to the set_exception method.
import asyncio


async def set_after(future, t, body, raise_exception):
    await asyncio.sleep(t)
    if raise_exception:
        future.set_exception(Exception("problem"))
    else:
        future.set_result(body)
        print(body)


async def main():
    loop = asyncio.get_event_loop()
    futures = [loop.create_future() for _ in range(2)]
    asyncio.create_task(set_after(futures[0], 3, 'hello', raise_exception=True))
    asyncio.create_task(set_after(futures[1], 7, 'world', raise_exception=False))
    print(futures)
    done, pending = await asyncio.wait(futures, timeout=10, return_when=asyncio.FIRST_EXCEPTION)
    print(done)
    print(pending)


asyncio.run(main())
In this line of code, if hasattr(body, 'error'):, body was a string; I thought it was already parsed JSON at that point. I should have been using "error" in body as my condition in any case. Whoops!
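For reference, a sketch of the corrected callback combining both fixes from this answer ("error" in body as the condition and an actual Exception instance passed to set_exception); IncomingMessage, logger and self.futures are the names from the question:

async def on_response(self, message: IncomingMessage):
    if message.correlation_id is None:
        logger.error(f"Bad message {message!r}")
        await message.ack()
        return
    body = message.body.decode('UTF-8')
    future = self.futures[message.correlation_id].pop()
    if "error" in body:
        # set_exception needs an actual BaseException instance,
        # not the raw string payload
        future.set_exception(Exception(body))
    else:
        future.set_result(body)
    await message.ack()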
Let's say you are sending a message through your bot to all servers.
If you do too many actions through the API at the same time, some actions will fail silently.
Is there any way to prevent this? Or can you make sure, that a message really has been sent?
for tmpChannel in tmpAllChannels:
    channel = bot.get_channel(tmpChannel)
    if channel is not None:
        try:
            if pText is None:
                await channel.send(embed=pEmbed)
            else:
                await channel.send(pText, embed=pEmbed)
        except Exception as e:
            ExceptionHandler.handle(e)
for tmpChannel in tmpAllChannels:
    channel = bot.get_channel(tmpChannel)
    if channel is not None:
        try:
            if pText is None:
                await channel.send(embed=pEmbed)
            else:
                await channel.send(pText, embed=pEmbed)
        except Exception as e:
            ExceptionHandler.handle(e)
        finally:
            await asyncio.sleep(1)  # this will sleep the bot for 1 second
Never hammer the API without any delay between requests. asyncio.sleep puts a delay between messages, and you will send all of them without failures.
I have a small Server written like this:
async def handle_client(reader, writer):
    request = (await reader.read()).decode('utf8')  # should read until end of msg
    print(request)
    response = "thx"
    writer.write(response.encode('utf8'))
    await writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
loop.create_task(asyncio.start_server(handle_client, socket.gethostname(), 8886))
loop.run_forever()
and a small client written like this:
async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        my_ip, 8886)
    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()
    data = await reader.read()  # should read until end of msg
    print(f'Received: {data.decode()!r}')
    print('Close the connection')
    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_echo_client(f"Hello World!"))
The client and the server start the communication but never finish it.
Why does the reader not recognise the end of the message?
If I write
request = (await reader.read(1024)).decode('utf8')
instead, it works, but I need to receive an arbitrarily large amount of data.
I tried to modify the code of the server like this:
while True:
    request = (await reader.read(1024)).decode('utf8')
    if not request:
        break
It receives all data blocks but still waits forever after the last block. Why?
How do I tell the reader on the server side to stop listening and proceed in the code to send the answer?
TCP connections are stream-based, which means that when you write a "message" to a socket, the bytes will be sent to the peer without including a delimiter between messages. The peer on the other side of the connection can retrieve the bytes, but it needs to figure out on its own how to slice them into "messages". This is why reading the last block appears to hang: read() simply waits for the peer to send more data.
To enable retrieval of individual messages, the sender must frame or delimit each message. For example, the sender could just close the connection after sending a message, which would allow the other side to read the message because it would be followed by the end-of-file indicator. However, that would allow the sender to send only one message, without the ability to read a response, because the socket would be closed.
A better option is for the writer to close only the writing side of the socket (such a partial close is sometimes referred to as a shutdown). In asyncio this is done with a call to write_eof:
writer.write(message.encode())
await writer.drain()
writer.write_eof()
Sent like this, the message will be followed by end-of-file and the read on the server side won't hang. While the client will be able to read the response, it will still be limited to sending only one message because further writes will be impossible on the socket whose writing end was closed.
To implement communication consisting of an arbitrary number of requests and responses, you need to frame each message. A simple way to do so is by prefixing each message with message length:
writer.write(struct.pack('<L', len(request)))
writer.write(request)
The receiver first reads the message size and then the message itself:
size, = struct.unpack('<L', await reader.readexactly(4))
request = await reader.readexactly(size)
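Putting the pieces together, here is a minimal sketch of length-prefixed framing with hypothetical send_msg/recv_msg helpers; the 127.0.0.1 address and port 8886 are simply carried over from the question:

import asyncio
import struct


async def send_msg(writer, data: bytes):
    # Prefix each message with its 4-byte little-endian length
    writer.write(struct.pack('<L', len(data)))
    writer.write(data)
    await writer.drain()


async def recv_msg(reader) -> bytes:
    # Read the length prefix first, then exactly that many bytes
    size, = struct.unpack('<L', await reader.readexactly(4))
    return await reader.readexactly(size)


async def handle_client(reader, writer):
    request = (await recv_msg(reader)).decode('utf8')
    print(f'Server received: {request!r}')
    await send_msg(writer, b'thx')
    writer.close()
    await writer.wait_closed()


async def client():
    reader, writer = await asyncio.open_connection('127.0.0.1', 8886)
    await send_msg(writer, b'Hello World!')
    reply = await recv_msg(reader)
    print(f'Client received: {reply.decode()!r}')
    writer.close()
    await writer.wait_closed()


async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 8886)
    async with server:
        await client()


asyncio.run(main())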
I'm trying to make a Slack bot. If I call it in a public channel it works fine, but when I call it (type a slash command) in any direct channel I receive "The server responded with: {'ok': False, 'error': 'channel_not_found'}". In public channels where I've invited my bot it works fine, but if I type "/my-command" in any DM channel I receive the response in a separate DM channel with my bot. I expect to receive these responses in the DM channel where I typed the command.
Here is some part of my code:
if slack_command("/command"):
self.open_quick_actions_message(user_id, channel_id)
return Response(status=status.HTTP_200_OK)
def open_quick_actions_message(self, user, channel):
"""
Opens message with quick actions.
"""
slack_template = ActionsMessage()
message = slack_template.get_quick_actions_payload(user=user)
client.chat_postEphemeral(channel=channel, user=user, **message)
Here are my Event Subscriptions
Here are my Bot Token Scopes
Can anybody help me to solve this?
I've already solved my problem; maybe it will help someone in the future. I now send my payload as the immediate response to the slash command, as shown in the docs, and the response_type is set to ephemeral by default.
That part of my code looks like this now:
if slack_command("/command"):
res = self.slack_template.get_quick_actions_payload(user_id)
return Response(data=res, status=status.HTTP_200_OK)
else:
res = {"text": "Sorry, slash command didn't match. Please try again."}
return Response(data=res, status=status.HTTP_200_OK)
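For illustration, a hypothetical shape of what get_quick_actions_payload could return; the exact blocks are an assumption, but response_type defaults to "ephemeral" for slash command responses and "in_channel" would make the reply public:

def get_quick_actions_payload(self, user):
    # Hypothetical payload; "ephemeral" is the default response_type
    # for slash command responses, "in_channel" would make it visible to everyone.
    return {
        "response_type": "ephemeral",
        "text": f"Quick actions for <@{user}>",
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"Quick actions for <@{user}>"},
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "action_id": "personal_settings_action",
                        "text": {"type": "plain_text", "text": "Personal settings"},
                    }
                ],
            },
        ],
    }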
Also, I have an action button where I need to receive a response too. For this I used the response_url (here are the docs) and the requests library.
Part of this code is here:
if action.get("action_id", None) == "personal_settings_action":
    self.open_personal_settings_message(response_url)

def open_personal_settings_message(self, response_url):
    """
    Opens message with personal settings.
    """
    message = self.slack_template.get_personal_settings_payload()
    response = requests.post(f"{response_url}", data=json.dumps(message))
    try:
        response.raise_for_status()
    except Exception as e:
        log.error(f"personal settings message error: {e}")
P. S. It was my first question and first answer on StackOverflow, so don't judge me harshly.
So I am trying to set up a FIX server that will serve to accept FIX messages.
I managed to receive messages from a demo client that I made.
I want to perform an action according to the received message and then return a response to the initiator.
I have added the code that I use; it somehow sends the response through the acceptor back to the initiator.
import quickfix as fix

def create_fix_response_for_wm(self, session_id, message, api_response):
    try:
        report = quickfix50sp2.ExecutionReport()
        report.setField(fix.Text(api_response.text))
        report.setField(fix.ListID(message.getField(66)))
        if message.getFieldIfSet(fix.OrderID()):
            report.setField(fix.OrderID(message.getField(14)))
        elif message.getFieldIfSet(fix.ListID()):
            report.setField(fix.OrderID(message.getField(66)))
        fix.Session.sendToTarget(report, session_id)
    except Exception as e:
        logger.exception(f'could not create response')
So after looking and testing a couple of times, this is what I found.
import quickfix as fix

def create_fix_response_for_wm(self, session_id, message, api_response):
    try:
        report = quickfix50sp2.ExecutionReport()
        report.setField(fix.Text(api_response.text))
        report.setField(fix.ListID(message.getField(66)))
        if message.getFieldIfSet(fix.OrderID()):
            report.setField(fix.OrderID(message.getField(14)))
        elif message.getFieldIfSet(fix.ListID()):
            report.setField(fix.OrderID(message.getField(66)))
        fix.Session.sendToTarget(report, session_id)
    except Exception as e:
        logger.exception(f'could not create response')
session_id should be a SessionID object, which can be built like this:
session_id = fix.SessionID("FIXT.1.1", <ACCEPTOR - SenderCompID of the acceptor configuration>, <INITIATOR - TargetCompID in the acceptor configuration>)
Configuration example:
[SESSION]
SocketAcceptPort=12000
StartDay=sunday
EndDay=friday
AppDataDictionary=./glossary/FIX50SP2.xml
TransportDataDictionary=./glossary/FIXT11.xml
DefaultApplVerID=FIX.5.0SP2
BeginString=FIXT.1.1
SenderCompID=MYACCEPTOR
TargetCompID=CLIENT
session_id = fix.SessionID("FIXT.1.1","MYACCEPTOR" , "CLIENT") - in our case
you you this sessionID object when sending.
fix.Session.sendToTarget(<YOUR MESSAGE>, session_id)