Where should I place input/output console for server? - linux

I'm developing a simple 2D online game and I'm now designing my server. The server will run on a Linux VPS, and I need a way to communicate with it (for example, to shut it down; since it runs on a VPS, simply closing the terminal won't work). So I think there are 2 options:
1) Write 2 applications - a server which doesn't print anything and doesn't accept console input, and a second console application which sends commands to the server (like exit, get online players, etc.).
2) Write 1 application with 2 threads - one is the real server, and the second thread is used for cin and cout. However, I'm not sure whether this will work on a VPS...
Or maybe there is a better approach? What is the usual way of doing this?
Remember that it must work on a VPS (SSH access only).
Thanks

I would go for a "daemon" (server) for the main server function and then use a secondary application that can connect to the server and send it commands.
Or just use regular signals, like most other servers do - when you reconfigure your Apache server, for example, you send it a SIGHUP signal that restarts the server. That way, you don't need a second application at all - just "kill -SIGHUP your_server_pid".
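For completeness, a minimal sketch of the signal approach, shown in Node.js for brevity (in a C++ server you would register handlers with sigaction instead, but the pattern is the same):
// minimal sketch: a daemon reacting to signals instead of reading console input
process.on('SIGHUP', function() {
  console.log('SIGHUP received: reloading configuration');
  // re-read config files here
});
process.on('SIGTERM', function() {
  console.log('SIGTERM received: shutting down cleanly');
  // close sockets, save player state, then exit
  process.exit(0);
});
// stands in for the real server loop, keeps the process alive
setInterval(function() {}, 1000);
// from an SSH session: kill -SIGHUP <server_pid>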

Related

What happens if I don't use NGINX with uWSGI or Gunicorn?

Can someone brief me on what happens if I don't use any web server (NGINX) in front of my application server (uWSGI or Gunicorn)?
My requirement is exposing a simple Python script as a web service. I don't have any static content to serve. In that scenario, can I go without NGINX?
What issues will I face if I go with a plain app server? Max requests per second would be around 50 to 80 (this is the upper limit).
Thanks, Vijay
If your script acts like a web server, then it is a web server, and you don't need any layer on top of it.
You have to make sure, though, that it acts like one:
listens for connections
handles them concurrently
wakes up upon server restart, etc…
Also:
handles internal connections correctly (e.g. to the database)
doesn't leak memory
doesn't die upon an exception
Having an HTTP server in front of a script has one great benefit: the script executes and simply exits, so there is no problem with memory handling and so on. Imagine your script becomes unresponsive, and ask yourself what happens then…
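The asker's stack is Python, but the checklist is language-agnostic. A minimal sketch of the "doesn't die upon an exception" point, illustrated in Node.js with a hypothetical handle() standing in for the real work:
// minimal sketch: a bare server that keeps listening when a handler throws
var http = require('http');
function handle(req, res) {
  // real work would go here; throwing in here must not kill the process
  res.end('ok');
}
http.createServer(function(req, res) {
  try {
    handle(req, res);
  } catch (err) {
    res.statusCode = 500; // report the failure instead of crashing
    res.end('internal error');
  }
}).listen(8000); // port is an assumption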

tinylr/nodejs - how to access the currently running server

In the parent process, I have started the tiny-lr (livereload) server, followed by spawning a child process which watches for changes to the CSS files. How do I pass the livereload server on to the child process? Or is it possible to query, from the child process, for the livereload server that is currently running, so that I don't create it again and get an "already in use" error for the port?
The same applies to the Node http server: can I find out whether the server is already running and use that instead of creating a new one?
"Is it possible to query for the livereload server" - it is possible, and it can be implemented in more than one way:
Use stdout/stdin to communicate with the child process: you can send messages from one process to the other and reply to them (see the sketch after this answer). For a detailed description, see the Node.js child_process documentation.
Use http.request to check if the port is in use.
You can use a file: the process with the server keeps the file open in the write mode - the content of the file stores the port on which the server runs (if needed).
You can use sockets for inter-process communication, as well.
Basically, none of the above gives a 100% guarantee, so you have to try/catch for errors anyway: the server may die just after your check, but before you get to do something with it.
"How to pass on the livereload server to the child process" - if you mean sharing an object between different processes, that is for sure out of the question; if you mean transferring ownership of the object, I am some 99.99% sure that is not possible either.
What is the problem with having just one process responsible for running the server? And why not use, let's say, forever to take care of running and restarting the server if needed?
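A minimal sketch of the parent/child messaging mentioned above, using the built-in IPC channel of Node's child_process.fork (file names and the message shape are hypothetical):
// parent.js - start the livereload server once, then tell the child about it
var fork = require('child_process').fork;
var child = fork('./watcher.js');
child.send({ livereloadPort: 35729 }); // 35729 is livereload's default port
child.on('message', function(msg) {
  console.log('child says:', msg);
});

// watcher.js - the child reuses the port instead of starting its own server
process.on('message', function(msg) {
  console.log('server already running on port', msg.livereloadPort);
  process.send({ ok: true });
});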

restart nodejs server programmatically

Use case:
My Node.js server starts with a configuration wizard that allows the user to change the port and scheme, and even update the Express routes.
Question:
Is it possible to apply such configuration changes on the fly? Restarting the server would definitely bring all the changes online, but I'm not sure how to trigger that from code.
Changing core configuration on the fly is rarely practiced, and Node.js and most HTTP frameworks do not support it at this point either.
Modifying the configuration and then restarting the server is a completely valid solution, and I suggest you use it.
To restart the server programmatically, you have to execute the logic outside of Node.js, so that it can continue once the Node.js process is killed. Given that you are running the Node.js server on Linux, a Bash script sounds like the best tool available to you.
Implementation will look something like this:
Client presses a switch somewhere on your site powered by node.js
Node.js then executes some JavaScript code which instructs your OS to execute some Bash script, let's say script.sh
script.sh restarts node.js
Done
If any of the steps is difficult, ask about it. Though step 1 is something you are likely handling yourself already.
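A rough sketch of step 2, assuming script.sh sits next to the server; the child is detached so the script survives the Node.js process it is about to kill:
// minimal sketch: launch script.sh in a way that outlives this process
var spawn = require('child_process').spawn;
function triggerRestart() {
  var child = spawn('bash', ['script.sh'], {
    detached: true,   // put the script in its own process group
    stdio: 'ignore'   // don't tie its stdio to the dying parent
  });
  child.unref();      // let this Node.js process exit without waiting
}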
I know this question was asked a long time ago, but since I ran into this problem I will share what I ended up doing.
For my problem, I needed to restart the server because the user is allowed to change the port on their website. What I ended up doing was wrapping the whole server creation (https.createServer/server.listen) in a function called startServer(port). I call this function at the end of the file with a default port. The user changes the port by accessing the endpoint /changePort?port=3000. That endpoint calls another function, restartServer(server, res, port), which in turn calls startServer(port) with the new port and then redirects the user to the new site on the new port.
Much better than restarting the whole Node.js process.
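A condensed sketch of that approach; the names follow the answer, while the TLS options and redirect host are assumptions:
// minimal sketch of the wrap-and-relisten approach described above
var https = require('https');
var tlsOptions = {}; // key/cert would go here; left empty in this sketch
var server;
function startServer(port) {
  server = https.createServer(tlsOptions, function(req, res) {
    res.end('hello'); // the real request handling goes here
  });
  server.listen(port);
}
function restartServer(res, newPort) {
  res.writeHead(302, { Location: 'https://example.com:' + newPort }); // hypothetical host
  res.end();                   // answer on the old port first
  server.close(function() {    // then stop accepting connections...
    startServer(newPort);      // ...and rebind on the new port
  });
}
startServer(3000); // default port, as in the answer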

Sending and performing commands from node.js to bash

I'm developing a sort of Flash Operator Panel for Asterisk, but with Node.js and Socket.io instead of depending on Flash.
I've polished the Node server and the front end, BUT I don't know how I could send events from Asterisk to the Node server and do things that will then be sent over the socket.
Given the fact that we have a heavily tuned Asterisk to suit our company's needs, neither connecting to the AMI nor to the Asterisk socket will solve my problem, because we aren't working with real extensions.
So, setting the Asterisk part aside, I want to know how I could send info to Node through bash or curl or whatever.
I thought about sending curl requests to the server, but then someone who knows the commands (pretty unlikely) could alter the application flow with fake data.
EDIT: Rethinking it, I would just want to be able to receive requests through the socket/server ??? and then be able to perform actions that will be emitted through Socket.io.
Is that even possible?
The answer really depends upon what specific data you are trying to get from Asterisk to Node. You're trying to replace the Flash Operator Panel, yet you don't have real extensions. I'm guessing that you are using Asterisk as an SBC/proxy of sorts.
If you truly want an event-driven approach, I suggest modifying your dialplan to reach out to Node whenever needed, with whatever data you want. This would most easily be achieved by calling an AGI script with some number of arguments (written in whatever language) that then connects to Node via an HTTP POST, a socket, or some other channel.
If you want a more passive approach, you could have Node stream-read the asterisk log files for data, or, as already suggested, connect to the Asterisk Manager Interface (AMI) and stream from there. Contrary to what has been stated previously, I don't consider this to be a very daunting task.
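A minimal sketch of the receiving end of that event-driven approach: an HTTP endpoint that an AGI script can POST to, which then relays the payload over Socket.io (the path, port, and payload shape are assumptions; io stands for an already-initialised Socket.io instance):
// minimal sketch: accept dialplan events over HTTP and fan them out via socket.io
var http = require('http');
http.createServer(function(req, res) {
  if (req.method === 'POST' && req.url === '/asterisk-event') {
    var body = '';
    req.on('data', function(chunk) { body += chunk; });
    req.on('end', function() {
      io.sockets.emit('asterisk', body); // io: the socket.io instance, assumed set up elsewhere
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080); // port is an assumption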
You want to open a socket from Node to Asterisk's AMI (Asterisk Manager Interface). I've never used Node, but I would imagine the code would look roughly like this:
var net = require('net');
var astman = new net.Socket();
astman.connect(5038); // connect to the AMI port 5038 on localhost
astman.on('data', function(data) {
  // do something with received data
});
One of the best-maintained AMI libraries is FreePBX's php-astmanager. While it's written in PHP, it should give you a pretty good idea of what you need to do.
You could certainly set up your node.js program to listen on a socket for messages from Asterisk. But you'd have to roll your own connection management scheme, authentication scheme, message durability (possibly), etc.
Alternatively -- and especially if the Node server and the Asterisk server are not on the same machine -- you could use a message queue program like RabbitMQ. That takes care of a lot of the important details involved in interprocess communication. It's pretty easy, too. On the Node side, check out https://github.com/postwait/node-amqp
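A rough sketch of the consuming side, following node-amqp's README (the queue name is an assumption):
// minimal sketch: consume Asterisk events from RabbitMQ with node-amqp
var amqp = require('amqp');
var connection = amqp.createConnection({ host: 'localhost' });
connection.on('ready', function() {
  connection.queue('asterisk-events', function(q) { // queue name is hypothetical
    q.bind('#');                                    // catch all routing keys on the default topic exchange
    q.subscribe(function(message) {
      console.log('event from Asterisk:', message);
      // forward to the browser here, e.g. via socket.io
    });
  });
});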
I've never used Asterisk but running command line programs can be done with the child_process module.
http://nodejs.org/docs/latest/api/child_processes.html
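For example, a minimal sketch of shelling out from Node; the Asterisk CLI command shown is just an illustration:
// minimal sketch: run a command-line program and capture its output
var exec = require('child_process').exec;
exec('asterisk -rx "core show channels"', function(err, stdout, stderr) {
  if (err) {
    console.error('command failed:', stderr);
    return;
  }
  console.log(stdout); // parse and forward the output as needed
});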

Load Balancing of Process in 1 Server

I have 1 process that receives incoming connections on port 1000 on 1 Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes on the server but with 1 endpoint. That way, the client will only see 1 endpoint/process, not multiple.
I have checked LVS and other load-balancing solutions. Those solutions seem geared towards load balancing across multiple servers.
Any other solution to help on my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
You may also want to go with a web server like nginx. It can load balance your app across multiple ports on the same machine, and it is commonly used to load balance Ruby on Rails apps (which are single-threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a Linux network daemon.
The two key system calls you'll want to look at are called fork and exec.
It sounds like you just need to integrate your server with xinetd.
This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
You need multi-processing or multi-threading. You aren't specific about the details of the server, so I can't give you exact advice. fork and exec, as Matt suggested, can be a solution, but really: what kind of protocol/server are we talking about?
I am thinking of running multiple applications, similar to ypops.
nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.
Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process? (See the sketch below.)
Alternatively, there must be some way of refactoring it internally.
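A minimal sketch of that round-robin idea on the client side (host name and port range are placeholders):
// minimal sketch: client cycles through ports 1000-1009 on each new connection
var net = require('net');
var nextPort = 1000;
function connectRoundRobin() {
  var port = nextPort;
  nextPort = (nextPort === 1009) ? 1000 : nextPort + 1;
  return net.connect(port, 'server.example.com'); // placeholder host
}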
It's possible for several processes to listen on the same socket at once by opening it before calling fork(), but (if it's a TCP socket) once accept() is called, the resulting connection socket belongs to whichever process successfully accepted it.
So essentially you could use:
Prefork, where you open the socket and fork a specified number of children, which then share the load (see the sketch below)
Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
Threads - you can share the sockets however you like with those, since file descriptors are not cloned; they're simply available to any thread.
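In Node.js terms, the prefork model described above is what the built-in cluster module implements: the master opens the port, and the forked workers share the listening socket. A minimal sketch:
// minimal sketch: prefork via Node's cluster module, one endpoint on port 1000
var cluster = require('cluster');
var http = require('http');
if (cluster.isMaster) {
  for (var i = 0; i < 4; i++) { // the worker count is up to you
    cluster.fork();             // children share the listening socket
  }
} else {
  http.createServer(function(req, res) {
    res.end('handled by worker ' + process.pid);
  }).listen(1000); // the single endpoint from the question
}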
