How can I control acknowledgement in Cloud Pub/Sub using Node.js?

Basically I have created a cloud function (written in Node.js) which triggers on a message published to a Cloud Pub/Sub topic and loads that data into a BigQuery table.
A message in the topic gets deleted after the cloud function reads it. I understand that the subscriber internally sends an acknowledgement and, as a result, the message gets deleted from the topic.
I want to control the acknowledgement sent to the publisher. How can this be achieved? I didn't find any documentation on this.

Google Cloud Functions does not allow you to control the acknowledgement of the Cloud Pub/Sub message. Upon completion of the function, the message is acknowledged for the subscription. If you want finer-grained control over acknowledgements, then you will need to use Google Cloud Pub/Sub directly. There is a Node.js client library.
Just some clarifying notes on acknowledgements: Acknowledging a message for a single subscription doesn't mean the message is deleted for the topic, only for the subscription. Other independent subscriptions will still receive the message and have to acknowledge it. This is also independent of the acknowledgement sent to the publisher. When a Google Cloud Pub/Sub message is published, the publish call is acknowledged (i.e., a response is sent to the publisher) once Google Cloud Pub/Sub has saved the message and guarantees it will be delivered to subscriptions. This is one of the main advantages of an asynchronous message delivery system: receiving the message from the publisher (and acknowledging the publish) is independent of delivering the message via a subscription (which is separately acknowledged by the subscriber).
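For reference, a minimal sketch of controlling acknowledgements yourself with the Node.js client library might look like this (the subscription name 'my-subscription' and the BigQuery step are placeholders, not part of the original question):
// Sketch: pull-subscription handler that decides when to ack.
// 'my-subscription' is a placeholder for an existing subscription.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const subscription = pubsub.subscription('my-subscription');

subscription.on('message', message => {
  try {
    const row = JSON.parse(message.data.toString());
    // ... load `row` into the BigQuery table here ...
    message.ack();   // processed successfully: remove it from the subscription
  } catch (err) {
    message.nack();  // not processed: let Pub/Sub redeliver it
  }
});

subscription.on('error', err => console.error(err));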

If I understand correctly: you made a Pub/Sub topic and placed a cloud function within the same project as this topic. The cloud function is deployed with a google.pubsub.topic.publish trigger for the specified topic.
When using a queue/topic, the producer and consumer operate independently of each other. This enables a loosely coupled architecture, which has its own advantages and disadvantages.
If the publisher publishes a message to the topic, it gets confirmation that the message was sent to the topic successfully. Otherwise your code will throw an exception (connection refused, forbidden, etc.). For Node.js and other languages, there are Pub/Sub client SDKs which you can use to publish messages fairly easily.
When the message is on the topic, it will go to the subscribers, which can be push or pull subscriptions. At this point, acknowledgement becomes important. Google Pub/Sub, like other queues/topics, is designed with guaranteed delivery. This means that if a message could not be delivered, delivery is retried after some (configurable) time, until the total message lifetime is exceeded (the default is 7 days).
When using a pull subscription and you want to let Pub/Sub know that you successfully received the message, you would need something like this in Node.js:
message.ack();
When using a push subscription to an API or an HTTP cloud function, you would need to return an HTTP status code. Pub/Sub expects a success status code (e.g. 200 or 204):
res.status(204).send();

The only way I have found to reliably control which messages get acknowledged and which don't in a cloud function is by using the REST service APIs.
This is because the Node.js Pub/Sub client handles acknowledgements and manages connections in the background, and that kind of background activity is not allowed in a cloud function.
However, the REST APIs are fairly easy to use and give fine-grained control over which messages get acknowledged.
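As a rough illustration (not an official sample), pulling and acknowledging through the REST API from Node.js could look something like this, assuming the google-auth-library package and a placeholder subscription path:
// Sketch: pull and acknowledge messages through the Pub/Sub REST API.
// The subscription path below is a placeholder.
const { GoogleAuth } = require('google-auth-library');

async function pullAndAck() {
  const auth = new GoogleAuth({ scopes: 'https://www.googleapis.com/auth/pubsub' });
  const client = await auth.getClient();
  const base = 'https://pubsub.googleapis.com/v1/projects/my-project/subscriptions/my-sub';

  // Pull a batch of messages; each one comes with an ackId.
  const pullRes = await client.request({
    url: `${base}:pull`,
    method: 'POST',
    data: { maxMessages: 10 },
  });
  const received = pullRes.data.receivedMessages || [];

  // Decide which messages to acknowledge; here we simply ack all of them.
  const ackIds = received.map(m => m.ackId);
  if (ackIds.length > 0) {
    await client.request({
      url: `${base}:acknowledge`,
      method: 'POST',
      data: { ackIds },
    });
  }
}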

Related

FCM, send multiple devices without tokens?

I want to send FCM messages to everyone who installed the app. Is it essential to get everyone's tokens from the database every time?
My app uses Firebase Firestore overall. If there are 100,000 users,
do I have to read 100,000 tokens from the database to send FCM each time? (I think that's a little heavy, isn't it?)
Is there another way?
I wonder: is the only way to send it by putting the registration IDs in the request?
And can you send it on time? All apps on the market send push messages on time, but if you read 100,000 tokens and send FCM messages separately, shouldn't they arrive at 9:01 or 9:02? But why do I always get messages at 9 o'clock?
What are the methods, logic, and algorithms they use (the way companies usually do it)?
I still have no clue at all.
There is no "send to all users" operation in FCM. You will either have to send to each token (that's not a heavy operation for FCM, which handles billions of such calls every second), or you have to subscribe all instances to a specific topic and then send to that topic (which ends up the same behind the scenes, just with Firebase loading the tokens for the topic for you).
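For illustration, a topic-based send with the Firebase Admin SDK for Node.js might look roughly like this (the topic name 'all-users' and the message contents are placeholders):
// Sketch: send one message to every device subscribed to a topic.
const admin = require('firebase-admin');
admin.initializeApp();

async function notifyAllUsers() {
  await admin.messaging().send({
    topic: 'all-users', // devices must have subscribed to this topic beforehand
    notification: {
      title: 'Daily update',
      body: 'Your 9 o\'clock notification',
    },
  });
}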
This has been covered a few times before, so I recommend checking:
How do you send a Firebase Notification to all devices via CURL?
How to send notifications to all devices using Firebase Cloud Messaging
Firebase Cloud Messaging - Send message to all users
The notifications panel in the Firebase console has an option to deliver messages at a specific time, but no such option exists in the Firebase Cloud Messaging API. You'll have to either implement your own mechanism to schedule the delivery, or you can deliver a data message right away and then only display the notification on the device when it's time.
This also has been covered a few times before, so check:
Firebase Messaging FCM Distribution over configurable time interval
How can scheduled Firebase Cloud Messaging notifications be made outside of the Firebase Console?
Flutter Firebase Messaging: How to send push notifications to users at specified time

How to Send message to Azure Service Bus Subscription Deadletter using Rest Api with deadletter Reason and Error Description?

I can't find an example of how to send a message to an Azure Service Bus subscription dead-letter queue using the REST API. It appears that the suffix for the endpoint should be /Subscriptions//$deadletterqueue. However, I can't find an example of how to pass the deadLetterReason and the deadLetterErrorDescription. Is it as simple as passing those values as message headers?
Messages are not sent directly to the dead-letter queue by the client code (REST API or any other SDK). Instead, messages are dead-lettered by the broker when MaxDeliveryCount is exceeded and no more attempts to process the message can be made. That's when the broker will move the message to the dead-letter queue with the reason.
Note that there are also less common reasons, such as the number of hops (forwarding), expired time-to-live, etc. MaxDeliveryCount is the most common scenario.
Microsoft documentation will help in addition to this post.
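For completeness, once the broker has dead-lettered a message, the reason and description it recorded can be read from the dead-letter sub-queue; a rough sketch with the @azure/service-bus Node.js package (the topic and subscription names are placeholders):
// Sketch: read dead-lettered messages, including the reason the broker recorded.
const { ServiceBusClient } = require('@azure/service-bus');

async function readDeadLetters() {
  const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
  const dlqReceiver = sbClient.createReceiver('my-topic', 'my-subscription', {
    subQueueType: 'deadLetter',
  });

  const messages = await dlqReceiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
  for (const msg of messages) {
    console.log(msg.deadLetterReason, msg.deadLetterErrorDescription);
    await dlqReceiver.completeMessage(msg);
  }

  await dlqReceiver.close();
  await sbClient.close();
}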

Peek and Complete Message using different Receiver Instances - Azure Service Bus

Scenario
When business transactions are performed, we're supposed to make that data available to end clients.
Current Design
Our web app publishes transaction messages to a topic on Azure Service Bus.
We expose APIs to clients through which they can consume the data from those transactions.
Upon calling these APIs, we read the messages from the Subscription and return it to the client.
Problem
We want a guaranteed delivery - we want to make sure the client acknowledges the delivery of the data. So we don't want to remove the message from the subscription immediately. We want to keep it until the client acknowledges it.
So we only want to do a "Peek" instead of "Receive".
So the client calls the first API, to get the data, where we do a Peek.
And once the client has received the packets, the client would call a second API, to acknowledge.
At this point, we want to remove the message from the Subscription, making it Complete.
The current design of the Service Bus Message Receiver is that a Complete can be performed only by the same Receiver instance that performed the Peek, as per the documentation, and we also observed the same when we tried it out.
The two APIs are separate, so we cannot do the Peek and the Complete using the same instance of the Receiver.
We are thinking about options to somehow make the Receiver a singleton across the APIs within that App Service.
However this will be a problem when the App Service scales out.
Is there a different way to achieve what we're trying to do here ?
There is an option available in Azure Service Bus to defer messages. Once a message is deferred, it can be received with the help of its sequence number.
The first client should receive the message and, instead of completing it, defer it and return it.
The second client (which has the sequence number) can then receive the deferred message from the subscription. Refer here for more details.
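A rough sketch of that defer/complete flow, shown here with the @azure/service-bus Node.js package for consistency with the rest of this page (connection handling and names are placeholders; the same idea applies to the .NET SDK):
// Sketch: API 1 defers the message; API 2 completes it later by sequence number.
const { ServiceBusClient } = require('@azure/service-bus');

const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
const receiver = sbClient.createReceiver('my-topic', 'my-subscription');

// First API: hand the data to the client and defer the message.
async function getDataForClient() {
  const [message] = await receiver.receiveMessages(1);
  await receiver.deferMessage(message);
  return { data: message.body, sequenceNumber: message.sequenceNumber };
}

// Second API: the client acknowledged, so fetch the deferred message and complete it.
async function acknowledge(sequenceNumber) {
  const [deferred] = await receiver.receiveDeferredMessages([sequenceNumber]);
  await receiver.completeMessage(deferred);
}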
Another option would be to not use a Service Bus client on your backend; instead, your clients could work directly with Service Bus using its REST API (assuming they can't use the AMQP client, if I am understanding your scenario correctly).
There are APIs to
Peek-Lock
Renew Lock
Unlock
Delete (Complete)
You could also proxy these requests if you'd like using your backend itself or a service like APIM if you are already using it.
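As a rough sketch (the names, the SAS token handling, and Node 18+'s global fetch are assumptions), the peek-lock and complete round trip over the REST API could look like this:
// Sketch: peek-lock a message in one call, complete (delete) it in a later call.
// SAS_TOKEN, the namespace, topic and subscription names are placeholders.
const headers = { Authorization: process.env.SAS_TOKEN };
const base = 'https://my-namespace.servicebus.windows.net/my-topic/subscriptions/my-subscription';

// First API call: peek-lock the next message on the subscription.
async function peekLock() {
  const res = await fetch(`${base}/messages/head?timeout=60`, { method: 'POST', headers });
  const props = JSON.parse(res.headers.get('BrokerProperties')); // includes MessageId and LockToken
  const body = await res.text();
  return { body, messageId: props.MessageId, lockToken: props.LockToken };
}

// Second API call: the client acknowledged, so delete (complete) the locked message.
async function complete(messageId, lockToken) {
  await fetch(`${base}/messages/${messageId}/${lockToken}`, { method: 'DELETE', headers });
}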
PS: Cross posting the answer for the same query on the MSDN forum

Azure Topics - Multiple Listeners on Same Subscription

Is there a way to have multiple listening clients on one Azure Topic Subscription, and have them all receive ALL messages?
My understanding is that the only implementation of a Subscription is that the Published message is only delivered to ONE client on that subscription, as it is like a queue.
Can these messages be copied to multiple clients using the same Subscription?
EDIT: Potential use case example
A server notifies all of its clients (web clients via browser, or application), that are subscribed to the topic, of an object that has changed its value
More simply, multiple PCs are able to see a data value change
EDIT 2: My setup/what I'm looking for
The issue that I am running into is that a message is marked as consumed by one client and not delivered to the other client. I have 3 PCs in a test environment: 1 PC publishing messages to the topic (we'll call this the Publisher), and 2 other PCs subscribed to the topic using the same SubscriptionName (we'll call these Client 1 and Client 2).
So we have this setup:
Publisher - Publishes to topic
Client 1 - Subscribed using SubscriptionName = Test1
Client 2 - Subscribed using SubscriptionName = Test1
The Publisher publishes 10 messages to the topic.
Client 1 gets Message 0
Client 2 gets Message 1
Client 1 gets Message 2
... And so on (not all 10 messages are received by both Client 1 and Client 2)
I want the clients to receive ALL messages, like this:
Client 1 AND Client 2 get Message 0
Client 1 AND Client 2 get Message 1
Client 1 AND Client 2 get Message 2
... And so on.
Service Bus is a one-to-one or end-to-end messaging system.
What you need is Azure Event Hub or Event Grid.
It is not possible for both client 1 and client 2 to get the same message.
To put it straight, when a message is received by client 1 from a subscription and processed successfully, the message is removed from the subscription, so client 2 will not be able to receive the same message again.
Hope this clarifies.
Yes, it's a one-to-one implementation, but if you have a real concern about message processing completing in sequential order, then it depends on the Receive mode.
You can specify two different modes in which Service Bus receives messages.
Receive and delete.
In this mode, when Service Bus receives the request from the consumer, it marks the message as being consumed and returns it to the consumer application. This mode is the simplest model. It works best for scenarios in which the application can tolerate not processing a message if a failure occurs. To understand this, consider a scenario in which the consumer issues the receive request and then crashes before processing it. As Service Bus marks the message as being consumed, the application begins consuming messages upon restart. It will miss the message that it consumed before the crash.
Peek lock.
In this mode, the receive operation becomes two-stage, which makes it possible to support applications that can't tolerate missing messages.
In the first stage, the service finds the next message to be consumed, locks it to prevent other consumers from receiving it, and then returns the message to the application.
After the application finishes processing the message, it requests the Service Bus service to complete the second stage of the receive process. Then, the service marks the message as being consumed.
If the application is unable to process the message for some reason, it can request the Service Bus service to abandon the message. Service Bus unlocks the message and makes it available to be received again, either by the same consumer or by another competing consumer. Secondly, there's a timeout associated with the lock. If the application fails to process the message before the lock timeout expires, Service Bus unlocks the message and makes it available to be received again.
If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called at-least once processing. That is, each message is processed at least once. However, in certain situations the same message may be redelivered. If your scenario can't tolerate duplicate processing, add additional logic in your application to detect duplicates. For more information, see Duplicate detection. This feature is known as exactly once processing.
Check this link for more details.
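For illustration, the two modes look roughly like this with the @azure/service-bus Node.js package (a sketch; the topic name is a placeholder and 'Test1' is the subscription name from the question):
// Sketch: the two receive modes side by side.
const { ServiceBusClient } = require('@azure/service-bus');
const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);

// Receive and delete: the message is marked consumed as soon as it is handed over.
const radReceiver = sbClient.createReceiver('my-topic', 'Test1', {
  receiveMode: 'receiveAndDelete',
});

// Peek lock (the default): the application settles the message in a second stage.
const plReceiver = sbClient.createReceiver('my-topic', 'Test1', {
  receiveMode: 'peekLock',
});

async function handleOne() {
  const [message] = await plReceiver.receiveMessages(1);
  if (!message) return;
  try {
    // ... process message.body ...
    await plReceiver.completeMessage(message);  // second stage: mark as consumed
  } catch (err) {
    await plReceiver.abandonMessage(message);   // unlock so it can be redelivered
  }
}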

Should I use Azure Service (such as Scheduler) for sending rest messages to my bot, or use a separate thread for notifications?

I am creating a bot using Microsoft Bot Framework (BotBuilder) and want it to message the user when an appointment is about to begin.
I currently use the Microsoft Graph API to access the user's Office 365 calendar and store the appointments. A background thread then keeps track of time and messages the user when an appointment is about to start.
The current idea is to use Graph webhooks to notify my bot about new appointments.
My question is, would it be smarter to use an Azure service (such as Scheduler) to keep track of the appointments, and send rest messages to my bot, which will then send a message to the user?
My worry is that as the number of users rises, the number of appointments and time checks will become too large, and maybe Azure services would be able to handle it better.
This is a perfect fit for Azure Functions with an HTTP trigger.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
This article explains how to configure and work with HTTP triggers and bindings in Azure Functions. With these, you can use Azure Functions to build serverless APIs and respond to webhooks.
Azure Functions provides the following bindings:
An HTTP trigger lets you invoke a function with an HTTP request. This can be customized to respond to webhooks.
An HTTP output binding allows you to respond to the request.
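A minimal HTTP-triggered function in Node.js looks roughly like this (a sketch using the classic context/req programming model; the payload handling is a placeholder):
// Sketch: an HTTP-triggered Azure Function that a webhook or scheduler can call.
module.exports = async function (context, req) {
  // For example, a Graph webhook or a scheduler could POST the appointment
  // that is about to start.
  const appointment = req.body;

  // ... tell the bot to proactively message the affected user here ...

  context.res = {
    status: 200,
    body: { received: Boolean(appointment) },
  };
};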
