GCS Object Change Notification, Pub/Sub unable to receive anything - python-3.x

Following up to a question, since I can't comment.
I followed Brandon Yarbrough's instructions and everything is configured. The problem is that I am not receiving anything; the script just says:
Listening for messages on projects/[project_id]/subscriptions/projects/[project_id]/subscriptions/subtestbucketthhh
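For reference, here is a minimal subscriber sketch (assuming the google-cloud-pubsub client library, with placeholder project and subscription names). The doubled "projects/.../subscriptions/projects/..." string in the log is what you would see if a fully-qualified subscription path were passed where only the short subscription name is expected, since subscription_path() builds the full path for you:

from google.cloud import pubsub_v1  # assumes google-cloud-pubsub is installed

project_id = "your-project-id"           # placeholder
subscription_name = "subtestbucketthhh"  # short name only, not the full path

subscriber = pubsub_v1.SubscriberClient()
# subscription_path() returns "projects/<project>/subscriptions/<name>",
# so passing an already-full path here produces the doubled path above.
subscription_path = subscriber.subscription_path(project_id, subscription_name)

def callback(message):
    print("Received: {}".format(message.data))
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
print("Listening for messages on {}".format(subscription_path))
try:
    streaming_pull_future.result()
except KeyboardInterrupt:
    streaming_pull_future.cancel()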

Related

How to keep bi-directional gRPC channel open from python client

I have defined a bi-dir gRPC streaming RPC for exchanging some configuration information with the server.
Server: Implemented in C++
Client: Implemented in python 3.10
The client opens a channel with the server using the call below:
grpc.channel_ready_future(self.channel).result(timeout=5)
Whenever the configuration changes in the client, it builds the gRPC message and yields the message to the server by calling the bi-dir RPC (RPC name "Push"). The client never explicitly closes the channel by calling close. But as soon as the server receives the message, the channel gets reset, so the next message from the client uses another channel.
My question is: what am I doing wrong in the client, and why does the channel get closed? One thing to note is that the server doesn't actually send anything back, so if I have a for loop as below, the client code hangs:
responses = self.stub.Push(
    make_client_msg(msg), timeout=timeout)
for response in responses:
    logging.info("response is {}".format(response))
So I removed the for loop and just print the responses, but the channel still gets reset:
responses = self.stub.Push(
    make_client_msg(msg), timeout=timeout)
logging.info("response is {}".format(responses))
I hope I was able to explain the problem without getting too much into the details.
++++++++
Update:
I was able to solve the channel reset issue by creating two subprocesses: the first, a synchronous process, builds the gRPC message and pushes it into a message queue; the second, an async process, reads from the queue and writes the message to the gRPC channel in a while loop. I never call done_writing() after writing to the gRPC channel, which keeps the channel alive, so I don't need to create a new channel for every write, which is an expensive operation. I had earlier implemented the whole thing using a Python iterator, but it sent an EOM at the end of the iterator, which reset my gRPC channel. There aren't many examples around this scenario, but thanks for some helpful comments.
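For illustration, here is a minimal thread-based sketch of the same idea: keep the request iterator alive (blocking on a queue) so the client never signals end-of-stream and the channel stays open. This is not the author's subprocess/async implementation, just the simplest version of the pattern; stub, Push, and make_client_msg are the names from the question:

import logging
import queue
import threading

send_queue = queue.Queue()

def request_generator():
    # Blocks on the queue and never returns, so no end-of-stream
    # is ever sent and the bi-dir stream stays open.
    while True:
        msg = send_queue.get()
        yield make_client_msg(msg)

def run_stream(stub):
    # No timeout: the stream is meant to stay open indefinitely.
    responses = stub.Push(request_generator())
    for response in responses:
        logging.info("response is {}".format(response))

# threading.Thread(target=run_stream, args=(stub,), daemon=True).start()
# send_queue.put(updated_config)  # each config change just enqueues a message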

Sending updated messages to Socket Server using selectors library

I have followed a tutorial on Python sockets and am trying to modify the client side to send updated data, but in vain. The source code of the tutorial can be found here. The problem is that I am not able to understand how to send updated messages with Python selectors after registering the connection. The example I am trying to modify is:
The server works fine.
Client: here I want to send data continuously at runtime.
In the example, two messages are sent from the client side, but they are defined at the time of registering the connection; what I want to do is send updated data at runtime.
I have tried to modify the object by using the modify method from the selectors library, but that didn't work.
One idea I have is to trigger a write event from the client side after updating the message, but even after spending quite some time I have not been able to find out how to do so.
Any ideas on how to send updated messages at runtime using selectors will be highly appreciated.
Edit 01
On both the server and client side, this and this line need to be commented out to prevent the connection from closing after the first message.
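In case it helps, here is a minimal sketch of one way to do this with selectors: keep the socket registered for both read and write events, push new messages into a queue from anywhere in the program, and drain that queue into the send buffer each time the socket is writable (the host, port, and message contents are placeholders, not the tutorial's code):

import queue
import selectors
import socket

sel = selectors.DefaultSelector()
outgoing = queue.Queue()  # the rest of the program can put() new messages here at any time

def start_connection(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    sock.connect_ex((host, port))
    # data holds the current send buffer for this connection
    sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE, data=b"")
    return sock

def service_connection(key, mask):
    sock, buf = key.fileobj, key.data
    if mask & selectors.EVENT_READ:
        recv_data = sock.recv(1024)
        if recv_data:
            print("received", recv_data)
    if mask & selectors.EVENT_WRITE:
        # Pull any messages produced at runtime into the send buffer.
        while not outgoing.empty():
            buf += outgoing.get_nowait()
        if buf:
            sent = sock.send(buf)
            buf = buf[sent:]
        # Keep the updated buffer attached to the registration.
        sel.modify(sock, selectors.EVENT_READ | selectors.EVENT_WRITE, data=buf)

start_connection("127.0.0.1", 65432)  # placeholder host and port
outgoing.put(b"first message")
while True:
    for key, mask in sel.select(timeout=1):
        service_connection(key, mask)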

Poco::Net::HTTPSClientSession receiveResponse always truncated abnormally

Encountered a very weird issue.
I have two VMs, running CentOS Linux.
The server side has a REST API (using non-Poco sockets), and one of the APIs responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 K, 240 K, or 288 K.
Same code, same environment, running on the server side: good.
On the client VM, using Python to do the REST call: good.
It ONLY fails if I use the same good code on the client VM.
When the message gets truncated, the HTTPS status code always returns 200.
On the server side, I logged the response message that I sent every time. Everything looks normal.
I have tried a whole bunch of things, like:
setting the socket timeout and receiveResponse timeout to an hour
waiting 2 seconds after sending the request but before calling receive
setting the receive buffer big enough
trying a whole bunch of approaches to make sure the receive stream is empty, with no more data
It just does not work.
Anyone have a similar issue? I've started pulling my hair out... Please talk to me, anything... before I am bald.

How to act upon error log additions?

Normally I watch logs using tail -f /the/error.log, but I usually only act upon them when I hear people complain. I know you can send them to an email address, but email simply sucks. So instead I want the error logs of my server to be sent to a dedicated Slack channel.
My question is: how can I watch for additions to the error log and capture those additions in a variable so I can send them as JSON to the Slack webhook? I also wonder: what if an error log entry is more than one line? I don't want a 20-line error to be sent as 20 separate messages.
All tips are welcome!
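One way to do this (a rough sketch, not a drop-in solution; the webhook URL, log path, and quiet-period length are placeholders) is to tail the file from Python, buffer new lines until the log has been quiet for a moment, and then post the whole buffered block to the Slack webhook as a single message, so a 20-line traceback arrives as one message:

import time
import requests  # assumes the requests package is installed

LOG_PATH = "/the/error.log"                            # path from the question
WEBHOOK_URL = "https://hooks.slack.com/services/XXX"   # placeholder webhook URL
QUIET_SECONDS = 2.0                                    # flush after the log has been quiet this long

def follow(path):
    # Yield new lines appended to the file, like tail -f; yield None when idle.
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                yield None
                time.sleep(0.2)

buffer = []
last_line_at = 0.0
for line in follow(LOG_PATH):
    now = time.time()
    if line is not None:
        buffer.append(line)
        last_line_at = now
    elif buffer and now - last_line_at >= QUIET_SECONDS:
        # Send the whole multi-line entry as one Slack message.
        requests.post(WEBHOOK_URL, json={"text": "\n".join(buffer)})
        buffer = []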

Messages Delivered Through IIS No Longer Appearing in Incoming Queue

I am attempting to send MSMQ messages to a remote private queue. I have the MSMQ server configured to receive incoming messages through IIS.
This configuration was working fine; however, messages are no longer being placed in the incoming message queue. The IIS logs show a response of 200, indicating success, and the sent messages appear in the outbound queue and indicate they were delivered.
The queues are write-permissioned to everyone.
The queue connection string looks like this:
DIRECT=https://somedomain.com/msmq/private$/queueName
Thanks!
Edit: I was finally able to work around this issue by using a non-transactional queue. One thing that really helped was enabling dead-letter queues; this allowed us to see that we were actually getting a 400 error from the destination server. After that, we implemented HTTP MSMQ message redirection, which was necessary because the MSMQ server specified in the source-to-destination path wasn't what the destination server expected. This still resulted in 400 errors, so finally we made the queue non-transactional and everything worked as expected.
