Discord.py differentiating between two different servers - python-3.x

I would like to ask about deploying a bot to 2 different servers. Is there any way to differentiate between the two bot instances? For example, I would like to run a bot for serverA and another instance of the same bot in serverB (a testing server). I would like the bot to respond to a command like "!ping" only once in serverB when testing, even though both serverA and serverB have running instances of my bot (meaning there are 2 running scripts of the same code). My dilemma is that with the same code running in 2 different terminals, when I call the !ping command in serverB, the action is performed twice, since there are 2 instances.

Running a bot in 2 terminals is not the same as having your bot join 2 servers. You only need 1 instance of the bot running; that single instance listens to both servers.
Differentiating between different servers is generally not needed because the bot will be responding in the context of the server it got a command from. However, if you want to keep track of certain data with files you can do so by including the server name in the file name. This way you can store data for each server separately.


Setup secondary redundant failover Node.JS server

Introduction
So I made a Discord bot and hosted it on Heroku. The bot is basic: it just listens for new messages and responds to them.
I am using Heroku's free tier and discovered that there are so-called "free dyno hours". At the end of the month, all the free hours are consumed and my bot goes offline until the next month.
I own a Raspberry pi and so I thought it would be a good idea to host it there as well in case Heroku goes offline.
What I want to achieve
Host the Discord bot on a hosting platform (Heroku).
Use my Raspberry Pi as a failover server.
Ensure that only one instance of the bot is running at a time.
If both servers are online, Heroku (chosen as the main server) has priority.
If the application on Heroku crashes or goes offline, the bot on my Raspberry Pi gets activated.
And vice versa.
How would that work if I had 3 and more servers?
How would I approach this if I had the bot connected to database?
This is just an example; I would appreciate it if a more general solution could be provided.
NginX(load balancer)
When searching for help, I came across load balancers and NginX.
If I am right, it basically functions like a gateway: all requests go to the server NginX is hosted on, and then they get redirected to one of many servers (depending on how loaded they are, etc.).
I have a few problems with this:
my bot doesn't receive any requests; it just listens to the Discord API.
if the server with NginX fails, it all falls apart?
My solution 1
My first approach was to leave the bot running on both Heroku and the Raspberry Pi at the same time.
This guarantees that the bot runs on at least one server if the other fails.
But it is not ideal, as the bot responds twice when both servers are up.
My solution 2
Heroku:
I added a simple Rest api endpoint with express.js to the bot app.
This way I can check if the bot on Heroku is running.
Raspberry pi:
I wrapped the bot app with this Node program that basically functions like a switch.
Link to a git repo: https://github.com/Matyanson/secondary-node-server.git
Every 3 minutes the program calls the endpoint and checks whether the app is down by looking at the status code.
It then either starts or shuts down the app on the Raspberry Pi (using pm2).
This kinda works but I am sure there is a better solution!
There is also a problem with Heroku(you can skip this):
Heroku uses dynos (containers) to run apps. There are different dyno configurations, including 'Web' and 'Worker'.
A Worker dyno is used for background jobs and is never put to sleep.
A Web dyno is the only dyno type that receives HTTP traffic. It is put to sleep after 30 minutes of receiving no web traffic.
I can switch either of these on/off:
both: there are 2 instances of my bot.
Worker only: I can't connect to the endpoint.
Web only: I have to 'ping' the API at least every 30 minutes or the bot sleeps.
Why am I asking?
Not because I need to solve this exact problem but because I want to learn a good way of doing this to use it in the future projects.
I also think this would be a good way to avoid relying 100% on external hosting providers.

nodejs local agent functionality

I have a website hosted on Heroku and Firebase (frontend in React, backend in Node.js), and I have some long-running scripts that I need to execute. My idea was to deploy a Node process to my Raspberry Pi to run them (because I need resources from inside my network).
How would I set this up securely?
I think I need a Node.js process that regularly checks the central server for jobs to be done. Can I use sockets for this? What technology would you use?
I think the design would be:
1. Local agent starts and connects to server
2. Server sends messages to agent, or local agent polls with time interval
EDIT: I have multiple users that I would like to serve. The user should be able to "download" the agent and set it up so that it connects to the remote server.
You could just use Firebase for this, right? Create a new Firebase DB for "tasks" (or whatever) that is only accessible to you. When the central server (whatever that is) determines there's a job to be done, it adds it to your tasks DB.
Then you write a simple node app you can run on your raspberry pi that starts up, authenticates with firebase, and listens for updates on your tasks database. When one is added, it runs your long running task, then removes that task from the database.
Wrap it up in a bash script that'll automatically run it again if it crashes, and you've got a super simple pubsub setup without needing to expose anything on your local network.
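A sketch of that worker, assuming the Firebase Admin SDK with a Realtime Database whose "tasks" node holds objects like { name: "..." } (the database layout and the runTask callback are assumptions):

```javascript
// Worker sketch: listen for entries under a Firebase Realtime Database
// "tasks" node, run each one, then delete it to mark it done.
// The DB layout and the runTask callback are assumptions, not a real schema.

// Pure helper: decide whether a task entry is runnable.
function isRunnable(task) {
  return Boolean(task && typeof task.name === "string");
}

function startWorker(runTask) {
  // Lazy require so this file can be loaded without the SDK installed.
  const admin = require("firebase-admin");
  admin.initializeApp(); // picks up GOOGLE_APPLICATION_CREDENTIALS etc.
  const tasksRef = admin.database().ref("tasks");

  tasksRef.on("child_added", async (snap) => {
    const task = snap.val();
    if (!isRunnable(task)) return;
    await runTask(task);     // the long-running job
    await snap.ref.remove(); // mark it done by deleting the entry
  });
}

// startWorker(async (task) => { /* long-running work here */ });
```

A pm2 process or a bash while-loop around this script covers the "restart it if it crashes" part.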

Where should I place input/output console for server?

I'm developing a simple 2D online game and am now designing my server. The server will run on a Linux VPS and I need a way to communicate with it (for example to close it; as it runs on a VPS, simply closing the terminal won't work). So I think there are 2 options:
1) Write 2 applications: a server which doesn't print anything and doesn't accept console input, and a second application, a console, which sends commands to the server (like exit, get online players, etc.).
2) Write 1 application with 2 threads: one is the real server, the second thread is used for cin and cout. However, I'm not sure this will work on a VPS...
Or maybe there is a better approach? What is the usual way of doing this?
Remember that it must work on a VPS (only ssh access to it).
Thanks
I would go for a "daemon" (server) for the main server function and then use a secondary application that can connect to the server and send it commands.
Or just use regular signals, like most other servers do - when you reconfigure your Apache server, for example, you send it a SIGHUP signal that restarts the server. That way, you don't need a second application at all - just "kill -SIGHUP your_server_pid".
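The signal route looks like this sketch (shown in Node.js to match the rest of this page; in C/C++ the equivalent is a sigaction handler). The config fields here are made up:

```javascript
// Daemon sketch: reconfigure on SIGHUP, shut down cleanly on SIGTERM,
// so controlling it over ssh needs nothing more than `kill -SIGHUP <pid>`.
let config = { maxPlayers: 16 }; // made-up example config

function reloadConfig() {
  // A real server would re-read its config file from disk here.
  config = { ...config, reloadedAt: Date.now() };
  return config;
}

function installHandlers() {
  process.on("SIGHUP", reloadConfig);
  process.on("SIGTERM", () => {
    // Close sockets, flush state, then exit.
    process.exit(0);
  });
}

// installHandlers(); // call at startup, then control the daemon with kill(1)
```

This keeps the server a single process with no second application and no console thread.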

Node.js tcp socket server on multiple machines

I have a node.js tcp server that is used as a backend to an iPhone chat client. Since my implementation includes private group chats, I store a list of users and which chat room they belong to in memory, in order to route messages appropriately. This all works fine assuming my chat server will always be on one machine, but when/if I need to scale horizontally I need a good way of broadcasting messages to clients that connect to different servers. I don't want to start doing inter-process communication between node servers and would prefer sharing state with redis.
I have a few ideas but I'm wondering if anyone has a good solution for this? To be clear here is an example:
User 1 connects to server 1 on room X, user 2 connects to server 2 on room X. User 1 sends a message, I need this to be passed to user 2, but since I am using an in memory data structure the servers don't share state. I want my node servers to remain as dumb as possible so I can just add/remove to the needs of my system.
Thanks :)
You could use a messaging layer (using something like pub/sub) that spans the processes:
                                Message Queue
-------------------------------------------------------------------------------
       |                                              |
    ServerA                                        ServerB
    -------                                        -------
    Room 1: User1, User2                           Room 1: User3, User5
    Room 2: User4, User7, User11                   Room 2: User6, User8
    Room 3: User9, User13                          Room 3: User10, User12, User14
Let's say User1 sends a chat message. ServerA sends a message on the message queue that says "User1 in Room 1 said something" (along with whatever they said). Each of your other server processes listens for such events, so, in this example, ServerB will see that it needs to distribute the message from User1 to all users in its own Room 1. You can scale to many processes in this way--each new process just needs to make sure they listen to appropriate messages on the queue.
Redis has pub/sub functionality that you may be able to use for this if you're already using Redis. Additionaly, there are other third-party tools for this kind of thing, like ZeroMQ; see also this question.
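A sketch of that fan-out over Redis pub/sub, using the classic node_redis v3 callback API (the channel name "chat" and the message shape are assumptions):

```javascript
// Pub/sub fan-out sketch. Channel name and message shape are assumptions.

// Pure routing: which locally connected users should receive this message?
// `rooms` maps room name -> array of locally connected user ids.
function localRecipients(rooms, msg) {
  return (rooms[msg.room] || []).filter((user) => user !== msg.from);
}

function startBridge(rooms, deliver) {
  // Lazy require so this file loads without a Redis server running.
  const redis = require("redis");
  const sub = redis.createClient();
  const pub = redis.createClient();

  sub.subscribe("chat");
  sub.on("message", (_channel, raw) => {
    const msg = JSON.parse(raw); // e.g. { room: "1", from: "User1", text: "hi" }
    for (const user of localRecipients(rooms, msg)) deliver(user, msg);
  });

  // Returned function is called when a locally connected user says something.
  return (msg) => pub.publish("chat", JSON.stringify(msg));
}
```

Every server, including the one that published, receives the event; filtering out the sender in localRecipients keeps the originating server from echoing the message back to its author.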
Redis is supposed to have built-in cluster support in the near future; in the meantime you can use a consistent hashing algorithm to distribute your keys evenly across multiple servers. Someone out there has written a hashing module for node.js specifically to implement consistent hashing for a redis cluster module for node.js. You might want to key off the 'room' name to ensure that all data points for a room end up on the same host. With this type of setup all the logic for which server to use stays on the client, so your redis cluster can basically remain unchanged and you can easily add or remove hosts.
Update
I found the consistent hashing implementation for redis I was talking about; it gives the code, of course, and also explains sharding in an easy-to-digest way.
http://ngchi.wordpress.com/2010/08/23/towards-auto-sharding-in-your-node-js-app/

How to get Node.js processes communicate with one another

I have a nodejs chat app where multiple clients connect to a common chat room using socketio. I want to scale this to multiple node processes, possibly on different machines. However, clients that connect to the same room are not guaranteed to hit the same node process. For example, user 1 will hit node process A and user 2 will hit node process B. They are in the same room, so if user 1 sends a message, user 2 should get it. What's the best way to make this happen, given that their connections are managed by different processes?
I thought about just having the node processes connect to redis. This at least solves the problem that process A will know there's another user, user 2, in the room but it still can't send to user 2 because process B controls that connection. Is there a way to register a "value changed" callback for redis?
I'm in a server environment where I can't control any of the routing or load balancing.
Both node.js processes can subscribe to some channel through redis pub/sub and listen to the messages you pass to this channel. For example, when user 1 connects to process A on the first machine, you can store in redis information about this user, along with which process on which machine manages them. Then when user 2, connected to process B on the second machine, sends a message to user 1, you can publish it to this channel, check which process on which machine is responsible for managing communication with user 1, and respond accordingly.
I have done some research on this. Below are my findings:
Like yojimbo87 said, first just use redis pub/sub (it is very optimized).
http://comments.gmane.org/gmane.comp.lang.javascript.nodejs/22348
Tim Caswell wrote:
It's been my experience that the bottleneck is the serialization and de-serialization of the data, not the actual channel. I'm pretty sure you can use named pipes, but I'm not sure what the API is. msgpack seems like a good format for the data interchange. There are a few libraries out there that implement msgpack or ipc frameworks on top of it.
But when serialization/deserialization becomes your bottleneck, I would try https://github.com/pgriess/node-msgpack. I would also like to test this out, because I think the sooner you have this the better.
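Before switching formats, it is worth measuring whether serialization really is your bottleneck. A stdlib-only sketch (the payload is made up; to compare msgpack, time its pack/unpack the same way):

```javascript
// Micro-benchmark sketch: time JSON serialize/deserialize round-trips.
function roundTrip(value) {
  return JSON.parse(JSON.stringify(value));
}

function benchmark(value, iterations = 10000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) roundTrip(value);
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / 1e6; // total milliseconds
}

// const payload = { room: "1", from: "User1", text: "hello ".repeat(100) };
// console.log(`${benchmark(payload).toFixed(1)} ms for 10000 round-trips`);
```

If the round-trip time is negligible next to your message rate, the channel (redis, pipes) is not the place to optimize.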
