How to get message info by ID [Telegram API] - bots

I'm writing a bot for Telegram to gather some stats from a group chat. I need to get info about every message (from the beginning of the chat). I know one way to do it, but it's quite a bad idea: I can use the forwardMessage method, but I need a second account for it, and I get timed out (for one hour) when I send messages too fast, so it's a very slow way to collect stats for a conversation with over 2 million messages. I tried setting a limit of 10 messages per second, but I still get timed out, so I don't know how the limiting works.
There must be another way to get JUST the message info by ID, without forwarding it. I can't find it in the API.

There is no API method to do this at this time; you can suggest the idea to @BotSupport. Until they add this feature, I'm doing the same thing as you.
According to the Bot FAQ, the Telegram API rate limit is 1 message per second per chat, and the global limit is 30 messages per second.
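Given those two ceilings, a bot pushing a large volume of forwardMessage calls has to pace itself against both at once. Here is a minimal sketch of the scheduling math, assuming the FAQ figures above (1/s per chat, 30/s global); the actual API call is deliberately left out:

```javascript
// Compute the earliest send time (ms offsets) for each queued message under
// two limits at once: 1 message/second per chat, 30 messages/second global.
// Pure scheduling logic - wiring it to sendMessage/forwardMessage is up to you.
function scheduleSends(messages, perChatMs = 1000, globalPerSec = 30) {
  const lastPerChat = new Map();  // chatId -> last scheduled send time
  const globalWindow = [];        // send times inside the trailing 1s window
  const plan = [];
  let now = 0;
  for (const msg of messages) {
    let t = now;
    const last = lastPerChat.get(msg.chatId);
    if (last !== undefined) t = Math.max(t, last + perChatMs); // per-chat gap
    // drop global-window entries that have aged out of the trailing second
    while (globalWindow.length && globalWindow[0] <= t - 1000) globalWindow.shift();
    if (globalWindow.length >= globalPerSec) {
      t = globalWindow[0] + 1000; // wait for the oldest send to age out
      while (globalWindow.length && globalWindow[0] <= t - 1000) globalWindow.shift();
    }
    lastPerChat.set(msg.chatId, t);
    globalWindow.push(t);
    plan.push({ chatId: msg.chatId, sendAt: t });
    now = t;
  }
  return plan;
}
```

Two consecutive messages to the same chat end up at least one second apart, and the 31st message inside any one-second window gets pushed into the next one.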

There is no way to do this with the Telegram Bot API. You can use the ReadHistory method of MadelineProto, which avoids the forwardMessage workaround entirely.

Related

How to remove all/single event listeners in socket.io

I am using socket.io for realtime functionality in my app with hapijs. When I try to add a listener on the server side in a hapijs route and then reload the same route/page 10 times or more, it starts showing me this error: (node:9004) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 board_5a40a863a7fbf12cf8a7f1b8_element_sorted listeners added. Use emitter.setMaxListeners() to increase limit. You can also see the error in the attached screenshot.
I tried each of the following calls to remove the listeners first, before adding them back with socket.on('eventname', callback):
io.sockets.removeListner("eventname")
io.removeListner("eventname")
socket.removeListner("eventname")
io.sockets.removeAllListners()
io.removeAllListners()
socket.removeAllListners()
But every time I got an error that removeAllListners/removeListner is not a function.
I also tried setting the max listeners limit to unlimited using each of the following:
io.setMaxListeners(0);
io.sockets.setMaxListeners(0);
socket.setMaxListeners(0);
But I still kept getting the same memory leak warning. So can somebody tell me the solution for this? I would prefer the approach of removing the event listeners first and then adding them back, but I don't know which function I need to call.
I also want to know one more thing: is it a good approach to create a new, unique event listener for every user, rather than one common event listener for all users?
For example I have a chat app with 1 million users.
In the first approach I would have to create 1 million event listeners for 1 million users, so whenever there is a new message from a user, only the users who are chatting with that user get the ping from the server.
In the second approach I would create 1 common event listener for all users, but now the server has to ping all 1 million users, and on the client end I have to parse every received message and check whether it is for me or for somebody else.
In my view the second approach is not good, because of security issues: there is a chance of a message being received by the wrong/unauthorized user.
But I am still not sure which one to follow, so if anyone can guide me on this, any help is appreciated.
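One option the question doesn't consider is keeping a single event name but scoping delivery with rooms: socket.io lets a socket call socket.join('chat_' + chatId), and the server then emits with io.to('chat_' + chatId).emit(...), so only room members are pinged and clients never have to filter other users' messages. The bookkeeping behind that idea, modeled with plain Maps so it's runnable here (RoomRouter and the room names are illustrative, not a socket.io API):

```javascript
// Models room-scoped delivery: one shared event name, but only members of a
// conversation's room would receive the ping - nobody outside it.
class RoomRouter {
  constructor() { this.rooms = new Map(); } // room name -> Set of user ids
  join(room, userId) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(userId);
  }
  // Returns the user ids that would receive an emit to this room.
  emit(room) { return [...(this.rooms.get(room) || [])]; }
}
```

This avoids both extremes: no listener per user (so no MaxListenersExceededWarning-style growth), and no broadcast to a million clients followed by client-side filtering, which also addresses the security concern about messages reaching unauthorized users.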

Kik bot is triggering "TooManyRequests" error in BotFramework

I recently deployed a bot using Azure and BotFramework to Skype, Slack, Telegram and some other platforms.
They all seem to work fine, except in Kik, where the bot will suddenly stop responding. The error message in BotFramework reads:
{"message":"Too many requests for user: 'redacted_user_name'","error":"TooManyRequests"}
The Kik tester is triggering this error through regular use, though when I test it on my (Android) phone, it works just fine.
Any idea what might be causing this?
EDIT:
After contacting Kik, I was told that my bot was sending more messages than it was receiving, and they only allow a surplus of 20 before a bot becomes banned.
They say the solution is to implement batching, which BotBuilder says is built in. (My bot uses session.send("text") followed by a prompt.) However, Kik does not see my messages as a batch, and every couplet counts as 2 messages.
I tried adjusting autoBatchDelay to see if 0 would work better than the default, and it made no difference. Changing it to 2000 also made no difference, and did not add a 2000 ms delay between messages.
var bot = new builder.UniversalBot(connector, {autoBatchDelay: 0});
Is it possible my bot is not batching properly? What steps could I take to address this?
Batching for Kik is currently on our backlog. In the meantime, is there any reason you can't send your text and prompt in the same message (with carriage returns in between if needed)? That should resolve your issue (as I understand it).
Also worth noting that the Kik rules for recovering from a throttling deficit are somewhat complex.
• In any given send message request, a bot can send up to 25 messages in a single POST request. Within the 25 messages, a bot is allowed to have up to 5 messages directed to a single user.
• Whether you send 1 message or 5 messages, that collection of requests is considered a “batch” of messages to a user.
• A bot is allowed 20 unsolicited batches to a user a day.
• This means you could be sending between 20-100 unsolicited messages to a user a day, depending on how many messages you have in a batch. The bot platform determines what counts as unsolicited with a debit/credit system that resets at the end of each day. For example: Julie sends the bot a message, and the balance becomes +1. The bot responds with 3 messages in one batch, and the balance becomes 0. Julie sends the bot 1 message, and the balance becomes +1. The bot responds with 5 messages in separate batches, and the balance becomes -4. Julie sends the bot a message, and the balance becomes -3. The bot responds with 5 messages in separate batches, and the balance becomes -8.
• If this deficit reaches -20, the daily user rate limit has been reached, and the bot will NOT be able to send any more messages to that user. There are different ways to work within this rate limit, e.g. using batches more efficiently or building a UX that encourages more user interactivity.
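The debit/credit bookkeeping described in those rules is easy to sanity-check in code. This sketch treats each inbound user message as +1 and each separate batch the bot sends as -1; running the worked example through it, the balance ends at -8:

```javascript
// Models Kik's described debit/credit system: a user message credits +1,
// each separate batch the bot sends debits -1, and the bot is cut off for
// the day once the balance reaches -20.
const CUTOFF = -20;

function runLedger(events) {
  // events: 'user' for an inbound message, or a number of separate bot batches
  let balance = 0;
  const trace = [];
  for (const e of events) {
    balance += e === 'user' ? 1 : -e;
    trace.push(balance);
  }
  return { trace, banned: balance <= CUTOFF };
}

// The worked example: user msg, 1 batch (of 3 msgs), user msg, 5 separate
// batches, user msg, 5 separate batches:
console.log(runLedger(['user', 1, 'user', 5, 'user', 5]).trace);
// [1, 0, 1, -4, -3, -8]
```

This also makes the original problem concrete: a text followed by a prompt in separate batches debits 2 per user message, so the balance drifts down by 1 on every exchange.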

GCM message to all users (without topics)

I have the following dilemma:
I need to send a heartbeat message every 5 minutes (or less) to all users of my app
I thought about topic messaging, but the 1 million subscriber limit is not acceptable for my application
So the only possibility left is sending out the message in batches of 1000
This is really resource-intensive
Now my question:
How can I make this process of batching and sending really efficient? Is there a good solution already made, preferably in node.js?
Thank you,
Sebastian
You can use XMPP instead of HTTP.
As Google says, it is less resource-intensive than HTTP:
The asynchronous nature of XMPP allows you to send more messages with
fewer resources.
Also, you can have 1000 simultaneous connections per app (sender ID):
For each sender ID, GCM allows 1000 connections in parallel.
There is also a node-xmpp solution available for this.
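Whichever transport you pick, the fan-out itself reduces to chunking the token list into the 1000-recipient batches the downstream API allows and firing them concurrently. A minimal sketch; the `send` function is a placeholder for your real transport call (an HTTP POST with registration_ids, or an XMPP stanza):

```javascript
// Split a device-token list into GCM-sized batches of at most 1000.
function chunk(tokens, size = 1000) {
  const batches = [];
  for (let i = 0; i < tokens.length; i += size) {
    batches.push(tokens.slice(i, i + size));
  }
  return batches;
}

// Fire all batches concurrently; `send(batch, payload)` is a placeholder
// for the actual GCM request and should return a Promise.
function broadcast(tokens, payload, send) {
  return Promise.all(chunk(tokens).map(batch => send(batch, payload)));
}
```

For a 5-minute heartbeat, keeping this loop on a worker with persistent connections (keep-alive for HTTP, long-lived streams for XMPP) is what makes it cheap; the chunking itself is trivial.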

Pusher API - Messages Per Minute

I implemented Pusher API in a live chat recently.
I launched on Pusher's Startup package yesterday. After 4 hours of being live, I received an email that my account was reaching the usage cap.
I logged in and looked at the stats, and discovered that Messages per Minute were between 5,000 and 20,000.
I don't understand how this is possible. I have around 100-150 connections open.
Why is the message count so high?
Armin
Found the answer myself! :)
Here is the link for anyone who may have the same problem:
https://pusher.tenderapp.com/kb/accountsbillingplanspricing/how-is-my-message-count-calculated
Basically, if you have 100 users subscribed to a channel and 1 message is sent, it counts as 100 messages, since each user has to be notified.
Bottom line is to properly filter your channels.
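In other words, the billed count is publishes multiplied by subscribers, which is easy to underestimate (the numbers below are just illustrative):

```javascript
// Pusher counts one message per delivery: each publish on a channel is
// billed once per subscriber connected to that channel.
const countBilled = (publishes, subscribers) => publishes * subscribers;

console.log(countBilled(1, 100));  // 1 publish to 100 subscribers -> 100
// With ~150 connections on one shared channel, ~34 publishes a minute is
// already ~5,100 billed messages per minute:
console.log(countBilled(34, 150)); // 5100
```

Hence the advice to split traffic across narrower channels: fewer subscribers per publish shrinks the multiplier.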

Distributed pub/sub with single consumer per message type

I have no clue if it's better to ask this here, or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client), and then does processing on the message, which might result in zero or more messages back out, either to the sender, or other clients within that group.
For the processing, I would like to essentially just shove the message into a queue, then it works its way through various message processors that might kick off their own items, and eventually, the bit running socket.io is informed "Hey, send this message back"
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc.).
If I wanted strong coupling, this would be easy, but I tried that before, and it just turns into a mess of spaghetti code that's very fragile, and I would like to avoid that. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running, but each authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to, essentially, have multiple "groups" of subscribers for a message, and the message will be processed only once per group.
I thought about maybe having each instance of a processor generate a unique name that would be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. When a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that list. While it seems like this would work, it feels over-complicated and fragile.
The core problem is I've never designed a system like this, so I'm not even sure the proper terms to use or look up. If anyone can point me in the right direction for this, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it, but ended up writing my own based on ZeroMQ. It lets you register services (req/rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto-discovery: the services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via unix sockets or IPC channels, whichever is preferred.
I ended up wrapping around the redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of the processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, then store the message in a string key with that ID, then publish the message ID.
On the receiving end, the first thing the subscriber does is attempt to add the message ID it just got from the publisher into a set of received messages for that group, using SADD. If SADD returns 0, the message has already been grabbed by another instance, and the subscriber just returns. If it returns 1, the full message is pulled out of the string key and handed to the listener.
Of course, this relies on redis being single threaded, which I imagine will continue to be the case.
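The claim-once step described above is small enough to sketch. With real Redis the atomicity comes from SADD itself (it returns 1 only to the first caller to add a given member); here an in-memory Map stands in for the Redis set so the flow is runnable:

```javascript
// Claim-once dispatch: every instance in a group sees the published message
// ID, but only the instance whose SADD "wins" actually handles the message.
const claimed = new Map(); // group name -> Set of claimed message ids

function sadd(group, messageId) { // mimics redis SADD's 0/1 return value
  if (!claimed.has(group)) claimed.set(group, new Set());
  const set = claimed.get(group);
  if (set.has(messageId)) return 0; // already claimed by another instance
  set.add(messageId);
  return 1; // this caller won the claim
}

function onPublish(group, messageId, handle) {
  // Every subscriber in the group runs this; only the first SADD winner
  // fetches the full message and processes it.
  if (sadd(group, messageId) === 1) handle(messageId);
}
```

Note that different groups claim independently, which is exactly the "once per group" semantics the question asks for: the auth group and an audit group can each process the same message once.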
What you might be looking for is an AMQP protocol implementation, where you can have queues bound to custom exchanges and implement a pub/sub model.
RabbitMQ is a popular AMQP implementation with lots of libraries, including a node.js library.
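The AMQP shape that matches the requirement is one fanout exchange, one queue per processor group bound to it, and multiple competing consumers per queue: every group sees every message, but only one consumer within a group handles each. Modeled in-memory below so it's runnable; with RabbitMQ itself this maps to assertExchange / assertQueue / bindQueue / consume via a client such as amqplib:

```javascript
// In-memory model of a fanout exchange with per-group queues and competing
// consumers: each published message is copied to every queue, and each queue
// hands its copy to exactly one of its consumers (round-robin here).
class FanoutExchange {
  constructor() { this.queues = new Map(); } // group -> { consumers, next }
  bind(group, consumer) {
    if (!this.queues.has(group)) this.queues.set(group, { consumers: [], next: 0 });
    this.queues.get(group).consumers.push(consumer);
  }
  publish(msg) {
    for (const q of this.queues.values()) {
      const consumer = q.consumers[q.next % q.consumers.length];
      q.next += 1;
      consumer(msg); // exactly one delivery per group
    }
  }
}
```

Compared to the hand-rolled Redis scheme, the broker handles the "one consumer per group" bookkeeping (plus acknowledgements and redelivery on crash), which removes most of the fragility the question worries about.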
