So I know my Twilio account has sent upwards of 50,000 texts. However, when I run the following:
# `client` is an authenticated twilio.rest.Client
i = 0
lst = []
for item in client.messages.stream():
    i += 1
    lst.append([item.body.replace('|', ''), item.from_, item.date_sent])
    if i % 100 == 0:
        print(i)  # progress indicator
It just keeps running and running. I was originally using client.messages.list, but that hit my 1-minute, then 5-minute, then 10-minute timeout in Lambda, so I decided to debug locally and run the above. I stopped it after it had gotten to 230,000, which is many multiples more messages than we've actually sent.
I don't quite know why it's doing that, and the docs don't say anything about it. I also can't find anything in the docs about telling the stream to move on, if what it's doing is returning the same page over and over.
It doesn't appear to be returning the same page, though - when I print the message body at every hundredth message, it changes every so often.
https://www.twilio.com/docs/libraries/reference/twilio-python/7.8.0/docs/source/_rst/twilio.rest.api.v2010.account.message.html#twilio.rest.api.v2010.account.message.MessageList
client.messages.stream() will return all the messages you have sent as well as received.
You can filter the list by the number a message was sent from, if you are only looking to retrieve your outbound messages.
If you are looking to limit the results you are getting, you can set a limit. If you want to speed up long requests like this, you can increase the page size to 1,000 (the default is 50).
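For example, a minimal sketch (assuming client is an authenticated twilio.rest.Client; the from number is a placeholder):

# Stream only your outbound messages, cap the total, and use bigger pages.
for message in client.messages.stream(
        from_='+15551234567',  # placeholder: your Twilio number
        page_size=1000,        # fewer round trips than the default of 50
        limit=50000):          # stop after this many records
    print(message.sid, message.date_sent)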
Is there a way to configure a pull subscription so that messages which caused an error and were nacked are re-queued (and thus redelivered) no more than n times?
Ideally, if the last processing attempt also fails, I would like to handle that case (for example, log that the message is being given up on and will be dropped).
Alternatively, is it possible to find out how many times a received message has already been attempted?
I use Node.js. I can see a lot of different options in the source code but am not sure how to achieve the desired behaviour.
Cloud Pub/Sub supports Dead Letter Queues that can be used to drop nacked messages after a configurable number of retries.
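A minimal sketch of configuring that with the Python client (the Node.js client exposes the same fields; project, topic, and subscription names are placeholders):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
dead_letter_policy = pubsub_v1.types.DeadLetterPolicy(
    dead_letter_topic="projects/my-project/topics/my-dead-letter-topic",
    max_delivery_attempts=5,  # allowed range is 5-100
)
subscriber.create_subscription(
    request={
        "name": "projects/my-project/subscriptions/my-sub",
        "topic": "projects/my-project/topics/my-topic",
        "dead_letter_policy": dead_letter_policy,
    }
)

Once the policy is attached, each delivered message also carries a delivery_attempt counter, so the subscriber can see how many tries a message has had.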
Currently, there is no way in Google Cloud Pub/Sub to automatically drop messages that were redelivered some designated number of times. The message will stop being delivered once the retention deadline has passed for that message (by default, seven days). Likewise, Pub/Sub does not keep track of or report the number of times a message was delivered.
If you want to handle these kinds of messages, you'd need to maintain persistent storage keyed by message ID that you could use to keep track of the delivery count. If the delivery count exceeds your desired threshold, you could write the message to a separate topic that you use as a dead letter queue and then acknowledge the original message.
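A rough sketch of that manual approach (the in-memory dict stands in for durable storage; process, publisher, and dead_letter_topic are placeholders):

MAX_ATTEMPTS = 5
delivery_counts = {}  # stand-in for durable storage (e.g. Redis, Datastore)

def callback(message):
    count = delivery_counts.get(message.message_id, 0) + 1
    delivery_counts[message.message_id] = count
    try:
        process(message)  # placeholder for your handler
        message.ack()
    except Exception:
        if count >= MAX_ATTEMPTS:
            # Give up: forward to the dead letter topic, then ack the original.
            publisher.publish(dead_letter_topic, message.data)
            message.ack()
        else:
            message.nack()  # redeliver and retry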
Based on this question, it seems that writes to an Azure DocumentDB output binding in an Azure Function will be retried 10 times if throttled (HTTP 429). I haven't verified this myself, though.
I would like to increase this limit on the number of retries. My data comes in big chunks over a short amount of time, followed by a very long period of downtime, which means that getting a 429 and waiting for a bit is okay for my purposes. I must guarantee, though, that no data is dropped.
One way for me to solve this is to increase the RU (request unit) limit in DocumentDB to make sure I don't get 429s while the big chunks of data come in, but it's already at about 2.5 times what I need during the downtime period. Is there any way to make the retries run indefinitely until they succeed, or, less ideally, to increase the number of retries to something more than 10?
Why not change the approach? Instead of inserting documents right away, you can make use of Service Bus and implement a dead-letter queue. Here are some links:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus
https://blog.jeroenmaes.eu/2017/01/process-service-bus-dead-letter-message-with-azure-functions/
The idea is to have something like this:
The current function, instead of saving the data in DocumentDB, sends it to the Service Bus (you just change the output binding).
Another function processes every message from the Service Bus; if processing fails (you can manage a timeout in the function), the message is moved to a dead-letter queue, as in the sketch below.
Another function processes any message in the dead-letter queue.
You just need to make a small change in the first function and create two more. It might sound too complicated, but you'll have strong consistency in the data. All of the above links include an example of what I mentioned here.
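Outside the Functions bindings, the middle step looks roughly like this with the azure-servicebus Python SDK (v7; conn_str, the queue name, and save_to_documentdb are placeholders):

from azure.servicebus import ServiceBusClient

with ServiceBusClient.from_connection_string(conn_str) as sb_client:
    with sb_client.get_queue_receiver(queue_name="imports") as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                save_to_documentdb(str(msg))  # placeholder insert; may raise on 429
                receiver.complete_message(msg)
            except Exception:
                # Move to the dead-letter queue for the third function to pick up;
                # abandoned messages are also dead-lettered automatically once
                # MaxDeliveryCount is exceeded.
                receiver.dead_letter_message(msg, reason="throttled")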
Recently I have been dealing heavily with the DocuSign API, especially the Bulk Send REST API method, since we have a requirement to send 30K envelopes in 3-4 hours. Given the API rule limits, this led us to leverage the bulk send feature.
Since bulk send has some limitations, such as its own queue mechanism whose size cannot exceed 2,000, I am implementing my solution to respect this limit.
To do that, I divided my bulk recipient file (30K recipients) into 30 CSV files.
Then I loop over the files, and inside the loop I check whether the queued item count for the batch has become 0; if it had, I would send the next batch, roughly as in the sketch below. During my many tests, even though every email reached my inbox, I never saw the queued property become 0, so I could never send the next batch.
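The polling loop I am trying to build looks roughly like this (plain REST with requests against the bulk_send_batch status endpoint; base_url, account_id, batch_id, and token are placeholders):

import time
import requests

def wait_until_drained(base_url, account_id, batch_id, token, poll_secs=30):
    # Poll the bulk send batch status until its queued count reaches 0.
    url = f"{base_url}/v2.1/accounts/{account_id}/bulk_send_batch/{batch_id}"
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        status = requests.get(url, headers=headers).json()
        print(status.get("queued"), status.get("sent"), status.get("failed"))
        if int(status.get("queued", 0)) == 0:
            return status  # batch drained; safe to send the next CSV batch
        time.sleep(poll_secs)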
Below is the screenshot I took from the API Explorer.
If I look deeper into Trial 3 to see what those 24 queued items are, I get the following response.
As you can see from the latest screenshot, even though the queued property indicates that there are some pending items, the resultSetSize property shows 0, even though I queried only the queued items.
For this reason I am not able to build my logic on the sent, queued, and failed property values, even though I thought I could rely on them. If not, how can I overcome this problem? Any help would be appreciated.
Thank you in advance
I am looking for the following feature in Gmail.
For each message I open, it tracks the time I spent reading the message when it is feasible to do so. For example, if I open message 1 and then move to message 2 by clicking a button within 2 seconds, it notes that the time spent on message 1 is less than 2 seconds.
Gmail automatically labels the messages on which the user spends less than some configurable amount of time (say 2 seconds) and assigns them a label, say "LowAttentionSpan". This way the user can periodically look for messages with this label and take actions like unsubscribing from a list to minimize the amount of time spent in the Inbox.
Is such a feature already available now or can it be developed using Gmail API?
I believe this feature is not yet available in Gmail. Referencing the documentation, there are no built-in labels similar to what you are looking for, nor can you configure Gmail to apply such labels automatically.
As gerardnimo said, there is currently no such feature available for Gmail. An approximate solution using the Gmail API comes to mind though:
Subscribe to push notifications and issue a watch on the UNREAD label.
Every time you get a push notification related to a certain user, it means that the user just started reading a mail (or marked an old mail as UNREAD). Check the difference in time since the last notification you got for the same user; if the difference is less than LowAttentionSpan seconds, you could add a custom label to the previous message, as in the sketch below.
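A minimal sketch of both steps with the Python client (service is an authorized Gmail API client; the Pub/Sub topic and label ID are placeholders, and in a real handler you would resolve the message ID from the notification's historyId via users.history.list):

import time

# Step 1: watch the UNREAD label so mailbox changes are pushed to Pub/Sub.
service.users().watch(
    userId="me",
    body={
        "labelIds": ["UNREAD"],
        "labelFilterAction": "include",
        "topicName": "projects/my-project/topics/gmail-watch",  # placeholder
    },
).execute()

# Step 2: in the push-notification handler, compare arrival times per user.
LOW_ATTENTION_SPAN = 2  # seconds, configurable
last_seen = {}  # user -> (timestamp, message_id) of the previous notification

def on_notification(user, message_id):
    now = time.time()
    prev = last_seen.get(user)
    last_seen[user] = (now, message_id)
    if prev and now - prev[0] < LOW_ATTENTION_SPAN:
        # The previous mail was open for less than the threshold: label it.
        service.users().messages().modify(
            userId="me",
            id=prev[1],
            body={"addLabelIds": ["Label_LowAttentionSpan"]},  # placeholder label ID
        ).execute()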
This simple solution has some caveats though.
If the user marks an old message as unread, it might cause some unwanted behavior.
Also, if the user reads only one mail and comes back e.g. three hours later to read another one, the solution above will interpret that as the user reading the first mail for three hours, which will not be the case. In other words, it will only work when the user reads multiple new mails in succession.
I'm setting up a message queue using ServiceStack-v3 that looks like this:
ClaimImport -> Validation -> Success
I've added hundreds of ClaimImports with no problem; the .inq count is correct. The issue is that I want to see how many claims were imported by checking ClaimsImport.outq, but it never seems to go past 101. Is there some other way I could check this, or is this max limit intentional?
This is the default limit, set by RedisMessageQueueClient.MaxSuccessQueueSize. The purpose of the .outq is to be a rolling log of recently processed messages.
Clients can subscribe to the QueueNames.TopicOut to get notified when a message is published to the .outq.