How to republish messages with pika and RabbitMQ - python-3.x

I need to listen to RabbitMQ messages, process each message just a little bit, and submit it to another exchange. Each example I have seen so far includes either this:
reader_connection.ioloop.start()
or this:
writer_connection.ioloop.start()
Because I need to both receive and send messages, I probably need to run both loops at the same time. Is there a way I could accomplish that?

There is no difference between publishers and subscribers. You can publish and subscribe over the same connection, or over different ones (in which case you will need to start the ioloop for each of them).
You can find some examples here: https://pika.readthedocs.org/en/0.10.0/examples/connecting_async.html
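To make that concrete, here is a minimal sketch using pika's BlockingConnection (pika 1.x API; the queue and exchange names are made up). One connection and one channel are enough to consume, process, and republish, with no second ioloop:

    import pika

    # one connection, one channel: read from a queue, republish to an exchange
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="incoming")  # made-up queue name
    channel.exchange_declare(exchange="outgoing", exchange_type="fanout")  # made-up exchange

    def on_message(ch, method, properties, body):
        processed = body.upper()  # stand-in for your per-message processing
        # republish on the same channel; no separate writer connection needed
        ch.basic_publish(exchange="outgoing", routing_key="", body=processed)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="incoming", on_message_callback=on_message)
    channel.start_consuming()

With the asynchronous SelectConnection you would do the same work from inside the connection's callbacks, all driven by a single ioloop, as in the linked examples.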

Related

Push notifications to millions of devices + APNs + node.js

My application stack is iOS (front end) and Node.js (back end). I have to send notifications to devices. In my Node.js part I'm using the apn module to send notifications, and it's working fine.
Now I have to send mass notifications: say I have 10,000 devices to notify at once. The logic I'm following is that I loop through the 10,000 devices and call the APNs provider for each one.
1. Why this for-loop approach?
I have to store each notification's details in my MongoDB collection, so I followed this approach.
The problem is that the notifications are received by only some devices, and very late at that (the next day).
I also read this link, which warns that APNs will reject this approach:
https://www.raywenderlich.com/156966/push-notifications-tutorial-getting-started
Is the above approach correct, and is there any way to make sure all notifications are delivered?
Please share your ideas. Thanks in advance.
If you need to process each individual notification before/after it is sent, I would recommend a design change: instead of a loop, have a look at a job queue.
With this design pattern, instead of your only step being to loop over notifications and send via APNs, you push these notifications into a queue/messaging system and have workers that pull from the queue and process the notifications (send via APNs and write to MongoDB). The nice part of this design is that as your application grows you can add more workers to handle the increased load without rewriting your application/architecture. Once you have it built, it may look something like the sketch below.
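A minimal sketch of the two halves, using RabbitMQ via pika for illustration; the queue name is made up, and send_via_apns/save_to_mongo are hypothetical placeholders for your APNs call and MongoDB write:

    import json
    import pika

    QUEUE = "push_notifications"  # made-up queue name

    def send_via_apns(token, payload):
        pass  # hypothetical placeholder for your APNs send (reuse one provider)

    def save_to_mongo(job):
        pass  # hypothetical placeholder for your MongoDB insert

    def enqueue(notifications):
        """Producer: push notification jobs onto the queue instead of sending in a loop."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue=QUEUE, durable=True)
        for n in notifications:
            ch.basic_publish(exchange="", routing_key=QUEUE, body=json.dumps(n))
        conn.close()

    def worker():
        """Worker: pull jobs off the queue, send via APNs, record the result."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue=QUEUE, durable=True)

        def handle(ch_, method, properties, body):
            job = json.loads(body)
            send_via_apns(job["token"], job["payload"])
            save_to_mongo(job)
            ch_.basic_ack(delivery_tag=method.delivery_tag)

        ch.basic_consume(queue=QUEUE, on_message_callback=handle)
        ch.start_consuming()

Run as many worker() processes as you need; RabbitMQ will distribute the jobs among them.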
I personally use RabbitMQ for my job queue, but that decision is something you need to research on your own. For example if you don't want to manage the messaging system you could look into something like AWS Simple Queue Service.
I think looping through 10,000 device ids and calling the APNs provider each time is not the right way forward. The documentation (the node-apn readme) explicitly says to reuse apn.Provider rather than recreate it every time, to achieve the best possible performance.
If you send a notification using an array of device ids rather than a single device id, you will get one response from APNs with the details for each device.

Chat / System Communication App (Node.js + RabbitMQ)

So I currently have a chat system running on Node.js that passes messages via RabbitMQ, and each connected user has their own unique queue that is subscribed and listening only to messages meant for them. The backend can also use this chat pipeline to communicate other system messages, like notifications/friend requests and other user-event-driven information.
Currently the backend has to loop and publish each message one by one per user, even if the payload of the message is the same for, let's say, 1000 users. I would like to get away from that and be able to send the same message to multiple different users, but not EVERY user who's connected
(for example: notifying certain users that their friend has come online).
I considered implementing a queue setup where all messages are pooled into the same queue and, instead of RabbitMQ fanning out to all the user queues, Node takes these messages and emits them to the appropriate users via socket connections (to whoever is online).
[diagram: proposed infrastructure]
This way the backend does not need to loop over hundreds or thousands of users and can send a single payload containing all the users the message should go to. I do plan to cluster the Node.js servers together.
I was also wondering, since I've never done this in a production environment, whether I will need to track each socket ID.
Potential pitfalls I've identified so far:
slower, since thousands of messages can pile up in a single queue;
manually storing socket IDs to manually transmit to users;
offloading routing to Node.js instead of RabbitMQ.
Has anyone done anything like this before? If so, what are your recommendations? Is it better to scale with per-user queues, or to pool grouped messages for all users into a smaller number of (larger) queues?
as a general rule, queue-per-user is an anti-pattern. there are some valid uses of this, but i've never seen it be a good idea for a chat app (in spite of all the demos that use this example)
RabbitMQ can be a great tool for facilitating the delivery of messages between systems, but it shouldn't be used to push messages to users.
"I considered implementing a queue setup where all messages are pooled into the same queue and, instead of RabbitMQ fanning out to all the user queues, Node takes these messages and emits them to the appropriate users via socket connections (to whoever is online)."
this is heading in the right direction, but you have to remember that RabbitMQ is not a database (see the previous link, again).
you can't randomly seek specific messages that are sitting in the queue and then leave them there. they are first in, first out.
in a chat app, i would have rabbitmq handling the message delivery between your systems, but not involved in delivery to the user.
your thoughts on using web sockets are going to be the direction you want to head for this. either that, or Server Sent Events.
if you need persistence of messages (history, search, last-viewed location, etc) then use a database for that. keep a timestamp or other marker of where the user left off, and push messages to them starting at that spot.
your concerns about tracking sockets for the users are definitely something to think about.
if you have multiple instances of your node server running sockets with different users connected, you'll need a way to know which users are connected to which node server.
this may be a good use case for rabbitmq - but not in a queue-per-user manner. rather, in a binding-per-user manner. you could have each node server create a queue to receive messages from the exchange where messages are published. the node server would then create a binding between the exchange and the queue for each user id that is logged in to that particular node server.
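here's a rough sketch of that binding-per-user idea, shown with pika rather than node just to keep it short (the exchange name and user ids are made up):

    import pika

    # each node server would do roughly this on startup / on login events
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="chat", exchange_type="direct")  # made-up exchange

    # one exclusive queue per server, deleted when the server disconnects
    server_queue = ch.queue_declare(queue="", exclusive=True).method.queue

    # one binding per logged-in user; remove it again when the user logs out
    for user_id in ("user-42", "user-99"):  # made-up ids for this server's users
        ch.queue_bind(exchange="chat", queue=server_queue, routing_key=user_id)

    # a publisher can now target one user without knowing which server has them
    ch.basic_publish(exchange="chat", routing_key="user-42",
                     body="your friend is online")

unbinding on logout is the mirror image (queue_unbind), and the exclusive queue goes away on its own when the server's connection drops.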
this could lead to an overwhelming number of bindings in rmq, though.
you may need a more intelligent method of tracking which server has which users connected, or just ignore that entirely and broadcast every message to every node server. in that case, each server would publish an event through the websocket based on who the message should be delivered to.
if you're using a smart enough websocket library, it will only send the message to the people that need it. socket.io did this, i know, and i'm sure other websocket libraries are smart like this, as well.
...
I probably haven't given you a concrete answer to your situation, and I'm sure you have a lot more context to consider. hopefully this will get you down the right path, though.

Can I mq_send a reply after I mq_receive?

I have one or more daemon apps running, and to communicate with them I have a client app. The client app is something simple executed on the command line. Chances are only one will be up at a given moment. When I run a command such as daemon update-config, the client does mq_open and sends the command. For some commands, like list, I'd want results back. It appears that if I call mq_send in my daemon right after I receive, I may receive that reply message within the daemon app itself.
What's the best way to send the reply to the client without accidentally processing it in the daemon? After a quick lookup there didn't appear to be an obvious solution, so I call sleep(1), which seems to solve my problem completely even though it's a hack. What's the best solution? Is sleep the most understandable and straightforward one? I don't feel like generating random/unique values, passing them in, and opening another message queue to send the reply. Sleeping for a second feels like the best solution, but I wonder what your solutions may be.
When using messaging systems you can do RPC calls, even though RPC is not the paradigm messaging fits best. The general approach to RPC with messaging is:
have distinct queues for requests and for replies (the latter can be ephemeral queues created for each request, or persistent queues);
give each message a unique ID that the reply will carry to identify which request it is answering (it's called correlation_id in AMQP, for example).
I would guess that you can use the same approach with POSIX queues as well.
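For illustration, here is the client side of that pattern in AMQP terms, using pika (the request queue name is made up; the daemon side would consume from it and publish its reply to the reply_to queue, copying the correlation_id):

    import uuid
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # ephemeral, exclusive reply queue created just for this client
    reply_queue = ch.queue_declare(queue="", exclusive=True).method.queue
    corr_id = str(uuid.uuid4())

    ch.basic_publish(
        exchange="",
        routing_key="requests",  # made-up request queue name
        properties=pika.BasicProperties(reply_to=reply_queue,
                                        correlation_id=corr_id),
        body="list",
    )

    # only accept the reply that carries our correlation_id
    for method, properties, body in ch.consume(queue=reply_queue, auto_ack=True):
        if properties.correlation_id == corr_id:
            print("reply:", body)
            break

Using a separate reply queue is what keeps the daemon from consuming its own reply (the thing the sleep(1) hack papers over), and the correlation_id tells the client which request a given reply answers.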

Using PubNub, is Unsubscribe a dual use command for Publish and Subscribe?

Yes, I know it seems like a simple question, but I just recently started using PubNub and I am confused about how to disconnect from a channel. I think the command to use is "Unsubscribe", and my misunderstanding relates to the dual use of the word.
Logically, I understand that once you initialize PubNub and publish a message, a separate process can subscribe to the established channel. When it's done, it unsubscribes. Got it!
Now we want to completely disconnect from PubNub. That is, end the channel.
Do I use the command "Unsubscribe" to do this? I guess I am logically looking for an "End" or "Disconnect" command rather than an "Unsubscribe" command, because it did not subscribe to the channel, it established the channel. I know it seems petty, but until I understand this it's difficult to move forward. So is this a dual-use command?
Thanks
You are on the right track here. Depending on the client platform in question, an unsubscribe resulting in an empty channel list will completely disconnect you.
On the more sophisticated clients and advanced/smart frameworks, there are the subscribe/unsubscribe API calls (which, as you described, subscribe/unsubscribe you to a specific channel) and, separately, public and/or private method calls for defining/detecting being "connected" or "online".
For example, iOS has specific connect and disconnect calls, separate from the subscribe/unsubscribe calls. In JS there is no explicit connect/disconnect, but regardless of whether you are subscribed to an active channel list, there may be background pings/heartbeats to the PN cloud to detect connectivity/online/offline state.
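As a rough illustration with the (much newer) PubNub Python SDK, the "unsubscribe until the channel list is empty" idea looks like this; this is a hedged sketch assuming the v4-style builder API, with made-up keys and channel name:

    from pubnub.pnconfiguration import PNConfiguration
    from pubnub.pubnub import PubNub

    config = PNConfiguration()
    config.subscribe_key = "sub-key"  # made-up keys
    config.publish_key = "pub-key"
    config.uuid = "client-1"
    pubnub = PubNub(config)

    # establish/join the channel by subscribing to it
    pubnub.subscribe().channels(["my_channel"]).execute()

    # ... publish and receive as usual ...

    # unsubscribing from the last channel leaves the channel list empty,
    # which on most client platforms tears the connection down completely
    pubnub.unsubscribe().channels(["my_channel"]).execute()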
If you give more info on the client platform and version you are on, we can give you more info on how to completely sever all connects to the PN cloud and achieve a "complete disconnect".
geremy

How can I detect that the publisher is disconnected with ZeroMQ and Node.js

I am using Node.js + ZeroMQ to subscribe to a certain feed using the PUB/SUB pattern.
How could I detect the condition where my publisher is disconnected? (I am connected as a subscriber.)
Another thing: is there a way to automatically get messages from the past when I first connect to the publisher?
Thanks in advance
You could publish a heartbeat, and if your subscriber misses one or more in a row, you can assume that you lost the connection and try to reconnect.
To get the messages from the past you need to use a different pattern, like REQuesting those missing messages. In this case you need a way to identify which messages are missing.
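Your stack is Node.js, but the heartbeat idea is compact enough to sketch on the subscriber side; here it is with pyzmq (the endpoint, the "heartbeat" message text, the one-second interval, and the three-misses threshold are all assumptions), and the same logic ports directly to the zeromq module for Node:

    import zmq

    HEARTBEAT_INTERVAL_MS = 1000  # assumed: publisher sends "heartbeat" once a second

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")       # made-up endpoint
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # receive everything, incl. heartbeats

    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)

    missed = 0
    while True:
        events = dict(poller.poll(timeout=HEARTBEAT_INTERVAL_MS))
        if sub in events:
            msg = sub.recv_string()
            missed = 0
            if msg != "heartbeat":
                print("data:", msg)
        else:
            missed += 1
            if missed >= 3:  # three missed beats: assume the publisher is gone
                print("publisher looks disconnected, reconnecting...")
                missed = 0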
In ZeroMQ's default pubsub model, there's no way for the subscriber to get messages from the past. See the ZeroMQ documentation, where you find statements like
"If you start the SUB socket (i.e., establish a connection to a PUB socket) after the PUB socket has started sending out data, you will lose whatever it published before the connection was made. If this is a problem, set up your architecture so the SUB socket starts first, then the PUB socket starts publishing."
and
"Pub-sub is like a radio broadcast; you miss everything before you join, and then how much information you get depends on the quality of your reception."
