I am evaluating the MessageBird service. I have a Virtual Mobile Number (VMN) and am able to send messages to dummy numbers (until I get approval for sending messages to real USA numbers).
The unknown: my problem is reading the messages received by the VMN.
Details: if I, as the VMN owner, send a message to a consumer, e.g. +1 (111) 111-1111, and I am interested in reading the consumer's response, how do I get it?
The MessageBird documentation expects me to know the ID of the response message object (or my understanding is wrong). The documentation is good, but I don't see a way to achieve this programmatically. Any suggestions on how to do it?
Thanks in advance!
MessageBird has a feature that forwards incoming SMS data through a webhook (GET or POST). If you set a URL, MessageBird will forward every incoming SMS to you (i.e. to your server), and you can easily read the GET/POST payload.
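For illustration, a minimal receiver for such a webhook could look like the Java sketch below. The port, the /sms/inbound path and the parameter names are assumptions; use whatever URL you configure for your VMN in the MessageBird dashboard, and log all forwarded fields while testing.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class InboundSmsWebhook {
    public static void main(String[] args) throws Exception {
        // Expose this endpoint publicly and register it as the webhook URL
        // for the VMN; MessageBird will then call it for every incoming SMS.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sms/inbound", exchange -> {
            // GET-style forwarding carries the SMS fields in the query string;
            // the exact parameter names depend on your webhook configuration.
            String query = exchange.getRequestURI().getRawQuery();
            if (query != null) {
                for (String pair : query.split("&")) {
                    String[] kv = pair.split("=", 2);
                    String key = URLDecoder.decode(kv[0], StandardCharsets.UTF_8);
                    String value = kv.length > 1
                            ? URLDecoder.decode(kv[1], StandardCharsets.UTF_8) : "";
                    System.out.println(key + " = " + value);
                }
            }
            exchange.sendResponseHeaders(200, -1); // a 2xx tells MessageBird the delivery succeeded
            exchange.close();
        });
        server.start();
    }
}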
Intro
We're developing a system to support multiple real-time messages (chat) and updates (activity notifications).
That is, user A can receive, via a WebSocket, messages for:
receiving new chat messages
receiving updates for some activity, for example if someone likes their photo
and more.
We use one single WebSocket connection to send all these different messages to the client.
However, we also support the user having multiple applications/clients open at the same time
(i.e., user A is connected from their web browser and also from their mobile app, at the same time).
Architecture
We have a "Hub" that stores a map of UserId to a list of active websocket sessions.
(user:123 -> listOf(session#1, session#2))
Each client, once the WebSocket connection is established, has its own Consumer which subscribes to a Pulsar topic named after the userId (e.g., the user:123 topic).
If user A is connected on both mobile and web, each client has its own Consumer on topic user:A.
When user A sends a new message from session #1 to user B, the flow is:
1. The user makes a REST POST request to send a message.
2. The service stores the new message in the DB.
3. The service sends a Pulsar message to topics user:B and user:A (see the producer sketch after this list).
4. The service returns a 200 status code plus the created Message response.
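Step 3, publishing the same stored message to both users' topics, might look roughly like the following with the Pulsar Java client; the service URL, topic names and payload here are assumptions.

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.nio.charset.StandardCharsets;

public class ChatPublisher {
    public static void main(String[] args) throws Exception {
        // Broker URL is a placeholder for illustration.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        String payload = "{\"from\":\"user:A\",\"to\":\"user:B\",\"text\":\"hi\"}";

        // Publish the stored message to both users' topics so every open
        // session (each with its own Consumer) receives it.
        for (String topic : new String[] {"user:B", "user:A"}) {
            Producer<byte[]> producer = client.newProducer().topic(topic).create();
            producer.send(payload.getBytes(StandardCharsets.UTF_8));
            producer.close();
        }
        client.close();
    }
}

In practice you would keep one long-lived producer per topic rather than creating and closing one per message; the loop just keeps the sketch short.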
Problem
If user A has two sessions open (two clients/websockets) and they send a message from session #1, how can we make sure only session #2 gets the message?
Since user A has already received the 200 response with the created message in session #1, there's no need to deliver the message to that session again via its Consumer.
I'm not sure if it's a Pulsar configuration, or perhaps our architecture is wrong.
how can we make sure only session #2 gets the message?
I'm going to address this at the app level.
Prepend a unique nonce (e.g. a GUID) to each message sent. Maintain a short list of recently sent nonces, aging them out so we never have more than, say, half a dozen. Upon receiving a message, check to see if we sent it, that is, check whether its nonce is in the list. If so, silently discard it.
Equivalently, name each connection. You could roll a GUID just once when a new websocket is opened, or you could incorporate some of the websocket's addressing bits into the name. Prepend the connection name to each outbound message, and discard any received message whose sender is "self".
With this de-dup'ing approach there's still some wasted network bandwidth; we can quibble about it if you wish. When the K-th websocket is created, we could create K topics, each excluding a different endpoint. Sounds like more work than it's worth!
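If it helps to see the bookkeeping, here is a minimal Java sketch of the nonce-based de-dup described above; the class and method names are mine.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class RecentNonces {
    private static final int MAX = 6;                 // "say, half a dozen"
    private final Deque<String> order = new ArrayDeque<>();
    private final Set<String> recent = new HashSet<>();

    /** Create and remember a nonce to prepend to an outbound message. */
    public synchronized String nextNonce() {
        String nonce = UUID.randomUUID().toString();
        order.addLast(nonce);
        recent.add(nonce);
        if (order.size() > MAX) {                     // age out the oldest entry
            recent.remove(order.removeFirst());
        }
        return nonce;
    }

    /** True if an incoming message carries one of our own nonces and should be discarded. */
    public synchronized boolean isEcho(String nonce) {
        return recent.contains(nonce);
    }
}

The connection-name variant is even simpler: generate one UUID per websocket, stamp it on every outbound message, and drop anything received whose sender id equals your own.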
I'm creating a chat application, and one detail is that "acknowledgements" are crucial. I'll get to what that means. I'm trying to figure out what the best exchange protocol would be.
Scenario:
Alice sends Bob a message. Bob is offline, so the message is stored on the server. Bob connects to the server through a WebSocket connection, and the server sends him the messages that were sent to him while he was away. This is where the problem arises. The WS API available for my app's ecosystem (Node.js, Nest.js specifically) has no pattern for waiting until this message has been sent; the mechanism seems to be fire & forget. What if the payload is quite large and the connection drops while the message is being sent?
Now, I know socket.io has support for acknowledgements. But from what I've read, socket.io has some overhead and therefore less-than-optimal performance. Whether that performance is something I actually need is another question, but I'm just trying to figure out how I can guarantee that a message has arrived on the other end, in both the client-to-server and server-to-client directions. How can I await it? I know that one approach is to attach a unique ID to the socket event and have the other side send you a confirmation that it received it. This is how socket.io does it, if I'm not mistaken.
But my question then is: how can I guarantee that the acknowledgement message was successfully sent? I'd need an "ack" for my "ack", and so on; I'd always need one more acknowledgement, so I don't see how that works.
What I thought of as an option is to use two REST endpoints to send and receive (or download) messages. You send when you send, but you receive when you get a ping that there are messages for you to download. This ping could come over a WebSocket connection, where the server notifies the client about a new message and the client then calls the receive endpoint, or through a more managed solution like FCM. The pros of that approach are twofold:
First, I have the REST interface to use, which is a lot more practical.
Second, I have the request-response pattern to use, so I have a theoretical guarantee that things are arriving if I get a response.
Now the problem with this approach is that there's a lot of overhead from opening a new HTTP connection every time I want to send or receive messages, if I'm not mistaken:
I have to wait for the initial request to reach the server before I even start waiting for the server to respond with messages. In the pure WebSockets case, I would theoretically only wait for the response-equivalent part (?)
This wastes bandwidth as well.
So one more question: where can I find out which clients will actually reuse an existing HTTP connection, like a WebSocket connection, if one is available, rather than create a new one? Do all clients do that? Is it only the browser? What about apps? Is it handled at the OS level?
So the final question is: how do I solve this problem of "acknowledgements" without wasting time and bandwidth? Are any of my conclusions/questions wrong or uninformed? Am I missing something?
Notes:
The server is Node.js and the client is Flutter.
I know about the WAMP subprotocol, but it doesn't have very reliable implementations for my ecosystem.
I'm not sure what your exact requirements or performance needs are, but I did a project that also needed reliable communication between client and server over WebSocket. The simplest thing I could think of was to build a request-response mechanism on top of the WebSocket, and then build your application data on top of that.
Here's a high-level overview of how I implemented it:
Implement request-response messages, using a transaction to identify which response belongs to which request.
Clients keep a map of pending transactions; when you send a request, wait for the server to send a response with the same transaction, or give up after a timeout.
The client wants to send a message to the server and constructs the request as follows:
{
"event": "sendMessage",
"type": "request",
"transaction": "<uuid/unique-value>",
"data": "<your-application-data>"
}
The server parses the message, checks that it is a request with the event name sendMessage, and then calls the related function.
The server sends a response message back to the client:
{
"event": "sendMessage",
"type": "response",
"transaction": "<uuid/unique-value>", // same unique value as in request
"data": "<your-application-data-result>"
}
Because the client has a mapping of which transaction belongs to which request, it can match this response to the originating request; once matched, the transaction is completed.
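To make that concrete, here is a rough Java sketch of the client-side transaction table; the transport and JSON handling are left out, and all names are illustrative.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class TransactionTable {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    /** Called before sending a request: register a pending transaction and return its id. */
    public String register(CompletableFuture<String> future) {
        String transaction = UUID.randomUUID().toString();
        pending.put(transaction, future);
        return transaction;
    }

    /** Called by the websocket receive handler for every "response" frame. */
    public void complete(String transaction, String data) {
        CompletableFuture<String> future = pending.remove(transaction);
        if (future != null) {
            future.complete(data);   // the request is now acknowledged
        }                            // otherwise: late or duplicate response, ignore it
    }

    // Usage sketch on the sending side (buildRequestJson and the websocket are yours):
    //   CompletableFuture<String> reply = new CompletableFuture<>();
    //   String tx = table.register(reply);
    //   webSocket.send(buildRequestJson("sendMessage", tx, data));
    //   String result = reply.get(10, TimeUnit.SECONDS);  // times out -> resend or surface an error
}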
Warning: please bear with me, as I am fairly new to Gatling. So, apologies in advance. :P :)
I was going through the LoadRunner asynchronous calls function web_reg_async_attributes, and I found that there are four different asynchronous conversation patterns:
Poll - The client polls the server periodically for information.
Long Poll - The client polls the server and waits for a response. When the response arrives, another poll request is initiated.
Push - The client sends a request. The server's response is to send updates when there are changes to the requested information.
Cross-user - One user performs an activity that is reflected in another user's client. For example, user1 sends an email and user2 receives a notification.
Now, I have a requirement where I need to test Long-Polling using Gatling.
As far as I know, there are two ways in Gatling:
Poll
SSE
Please feel free to let me know in case I am wrong.
Using the polling function of Gatling, I am getting a Gateway Timeout error. My theory is:
Gatling sends the request --> doesn't get a response --> comes back with a Gateway Timeout error.
Is there a way I can emulate Long Polling in Gatling? Please help me out in resolving this challenge.
Poll works in a similar fashion to long poll.
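For what it's worth, the behaviour being emulated looks like the plain-Java loop below (this is not Gatling DSL; the URL and timeout are assumptions): the server holds each request open until it has data, and the client immediately issues the next poll when a response arrives. A gateway timeout usually means some intermediary gives up before the server releases the held request, so the client-side request timeout has to exceed the server's hold time.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/updates"))   // placeholder endpoint
                .timeout(Duration.ofMinutes(2))                   // must exceed the server's hold time
                .GET()
                .build();

        while (true) {
            // This call blocks until the server releases the held request.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("update: " + response.body());
            // Re-poll immediately; that is what distinguishes long poll from plain poll.
        }
    }
}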
I am currently testing a Diameter protocol receiving component using Seagull to send my Diameter messages.
I have realised that I am having to manually kill the Seagull process, as it expects a response back once the Diameter message has been received by the system under test, and sending one is not something the system is set up to do.
Before I look at changing the way I send my messages to work around this issue, I wanted to check whether the standard behaviour for the Diameter protocol is to send a response on receipt of a message, and therefore whether this is a requirement that was missed during design.
I'm not familiar with a Diameter interface that includes a request without an answer, and I doubt one exists, since the protocol includes a lot of parameters that support the request/answer mechanism (the R-bit, Hop-by-Hop and End-to-End identifiers, the Session-Id AVP, ...). However, there are dozens of Diameter interfaces, so please share which interface you work with (for example: Ro, Gy, Gx, S6a, ...).
Regarding your Seagull case:
Seagull can just send; it does not have to receive. Check where you have a "receive channel" in your scenario XML: this is where Seagull waits for an answer. Remove it and you have a Seagull that only sends.
Every correct Diameter negotiation starts with a request (CER) and an answer (CEA). If you want to simulate a full, correct flow, your Seagull will have to wait for answers.
We have the following scenario that we would like to solve using Apache Camel:
An asynchronous request arrives at an AMQP endpoint configured in Camel. This message contains a header property with a reply-to that should be used for the response. Camel must pass this message to another service using JMS and then route the response back to the reply-to queue from the AMQP request. This seems like a textbook example for using the InOut functionality in Camel, but we have one problem: the reply from the JMS service could take a long time, in some cases several days.
As I understand it, using InOut would mean locking a thread to the long-running service. If we are unlucky, we could get several long-running calls simultaneously, and in the worst-case scenario all threads would be busy waiting for replies, clogging the system.
What strategy should I use to solve the problem described above? At the moment, I have created two separate routes: one that listens to the AMQP endpoint and forwards the message to the JMS endpoint, and another that listens to the reply-to queue of the JMS system and is responsible for sending the reply back to the AMQP reply-to. The problem I have right now is how to store the AMQP reply-to between these two routes, and I am not sure this is a good overall solution to the problem.
Any tips or ideas on how to solve this problem would be greatly appreciated.
If you have to wait more than a minute for a reply, it's probably a good idea to treat the reply as asynchronous and create separate request and response routes.
Since you mention several days, you might even want to survive an application restart (or even a backup/restore) and still be able to correlate the response. In such cases, you need to store the correlation information in a persistent store such as a database, or in a JMS queue using message properties, with selectors to retrieve the correlation information later.
I've used both queues and databases for long-running request/reply correlation with success.
It's always good practice to be able to fail over or restart the server or the application at any time, knowing that any ongoing processing will pick up where it left off without errors.
There is a cost in complexity and performance, but robustness is often preferred over performance.
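As a rough illustration of the two decoupled routes (the endpoint URIs, header names and the correlationStore bean are all assumptions; the persistence behind that bean would be your database or queue):

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class LongRunningReplyRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Route 1: accept the AMQP request, persist (correlation id -> reply-to)
        // through the hypothetical correlationStore bean, then forward one-way
        // so no thread sits waiting for a reply that may take days.
        from("amqp:queue:incoming.requests")
            .setHeader("JMSCorrelationID", simple("${exchangeId}"))
            .to("bean:correlationStore?method=saveReplyTo")
            .setExchangePattern(ExchangePattern.InOnly)
            .to("jms:queue:backend.requests");

        // Route 2: whenever the backend's reply arrives (assumed to echo the
        // correlation id, as in the usual JMS request/reply convention), look
        // the original reply-to up again and route the response dynamically.
        from("jms:queue:backend.replies")
            .to("bean:correlationStore?method=loadReplyTo")   // e.g. sets a header "origReplyTo"
            .toD("amqp:queue:${header.origReplyTo}");
    }
}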