I am making a server application in Kotlin, and the server does the following things:
Binds a ServerSocket port, let's say 10001.
This port accepts TCP connections from clients (Users). One thread per connection. Works as intended now.
It also opens and binds a local port 10002, from localhost only.
This port allows an external application on the local host to connect and communicate as a manager thread.
It initiates a remote connection to another server over UDP, translating TCP data from port 10001 to UDP by restructuring the data packets, and vice versa.
This thread is created on demand by the thread handling the port 10001 connection in #1 above.
Now, we have 3 connections as shown below (Manager & User connections are two different Threads):
Manager(10002) --> | Kotlin | --> Remote Server
User(10001) <-----> | Server | <-- (UDP Port)
So, I want to send some commands from the Manager thread to a User thread by specifying a certain thread identifier, and that will trigger a code block in the User thread to send some JSON data to the user's terminal.
And one of the commands from the Manager thread will start a remote server connection (UDP, assume a thread too) to communicate and translate data between the User thread and the remote server connection.
So in this case, how do I manage the communication between threads, especially between the Manager and User threads?
I have been able to create threads to accept User-side connections, and it works fine now.
val socketListener_User = ServerSocket(10001)
socketListener_User.use {
    while (true) {
        val socket_User = socketListener_User.accept()
        thread(start = true) {
            /** SOME CODE HERE **/
            /** THIS PART IS FOR USER THREAD **/
        }
    }
}
The user can send data at any time to the Server as well as the Manager, so the server must be on standby for both sides, and neither must block the other.
It should be similar to an instant-messenger server, although IMs usually store data in an external database and trigger the receiver to read it, don't they?
I believe there must be some way to establish communication channels between threads to do the above tasks, which I have not figured out.
After digging through some docs, I ended up using nested MutableMaps to store the data that needs to be shared between threads.
In the main class file, before main(), I declared vals for the data-storage maps. Note the nested type parameters have to be spelled out, and a plain HashMap is not thread-safe when several threads mutate it, so ConcurrentHashMap is the safer choice:
val msgToPeers : MutableMap<String, MutableMap<Int, Pair<String, String>>> = ConcurrentHashMap()
// Format: < To_ClientID, < Sequence_Num, < From_ClientID, Message_Body >>>
Next, in the thread for each server-socket connection, create two sub-threads:
Sender Thread
Receiver Thread
In the Sender Thread, construct the data map and add it to msgToPeers using
msgToPeers.set() or msgToPeers["$receiverClientID"] = .......
Then in the Receiver Thread, run a loop that scans the map, picks up only the data you need, and writes it out to the socket writer.
Remember to use msgToPeers["$receiverClientID"]?.remove(sequenceID) to clear each processed message before the next loop iteration.
Oh, I also added a 50 ms pause per loop, to make sure the sender threads have enough time to queue messages before the scanner takes a message and clears it.
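The polling loop and the 50 ms pause can be avoided entirely with a blocking queue per client: the receiver simply blocks until a message arrives, so no message can be missed and nothing has to be manually removed. This is a minimal sketch of that per-client "mailbox" idea using java.util.concurrent (the client id and command strings are made up for illustration):

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class Mailboxes {
    // one mailbox (queue) per connected client, keyed by client id
    static final Map<String, BlockingQueue<String>> mailboxes = new ConcurrentHashMap<>();

    static BlockingQueue<String> mailboxFor(String clientId) {
        // creates the mailbox on first use, atomically
        return mailboxes.computeIfAbsent(clientId, id -> new ArrayBlockingQueue<>(100));
    }

    public static void main(String[] args) throws InterruptedException {
        // Manager thread: push a command into the User thread's mailbox
        Thread manager = new Thread(() -> {
            try {
                mailboxFor("user-42").put("SEND_JSON"); // blocks only if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        manager.start();

        // User thread side: take() blocks until a message arrives -- no polling, no sleep
        String command = mailboxFor("user-42").take();
        System.out.println(command); // → SEND_JSON
        manager.join();
    }
}
```

In the real server, each User thread would loop on take() (or poll() with a timeout so it can also service its socket), and the Manager thread would look up the target thread's mailbox by the client identifier.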
Related
I know that you utilize a port to address a process and that you have to use sockets for handling multiple requests on a web server, but how does it work? Does the process create a socket thread for each connection? Is threading the answer?
Overview
This is a great question, and one that will take a bit to explain fully. I will step through different parts of this topic below. I personally learned multi-threading in Java, which has quite an extensive concurrency library. Although my examples will be in Java, the concepts carry over between languages.
Is threading valid?
In short, yes: this is a perfect use case for multi-threading, although single-threaded is fine for simple scenarios as well. However, there exist better designs that may yield better performance and safer code. The great thing is that there are loads of examples of how to do this on the internet!
Multi-Threading
Let's investigate the sample code from this article, seen below.
public class Server
{
public static void main(String[] args) throws IOException
{
// server is listening on port 5056
ServerSocket ss = new ServerSocket(5056);
// running infinite loop for getting
// client request
while (true)
{
Socket s = null;
try
{
// socket object to receive incoming client requests
s = ss.accept();
System.out.println("A new client is connected : " + s);
// obtaining input and out streams
DataInputStream dis = new DataInputStream(s.getInputStream());
DataOutputStream dos = new DataOutputStream(s.getOutputStream());
System.out.println("Assigning new thread for this client");
// create a new thread object
Thread t = new ClientHandler(s, dis, dos);
// Invoking the start() method
t.start();
}
catch (Exception e){
s.close();
e.printStackTrace();
}
}
}
}
The Server code is actually quite basic but still does the job well. Let's step through all the logic seen here:
The Server sets up on Socket 5056
The Server begins its infinite loop
The Server blocks on ss.accept() until a client request is received on port 5056
The Server does relatively minimal operations (i.e. System.out logging, set up IO streams)
A Thread is created and assigned to this request
The Thread is started
The loop repeats
The mentality here is that the server acts as a dispatcher. Requests enter the server, and the server allocates workers (Threads) to complete the operations in parallel so that the server can wait for and assist the next, incoming request.
Pros
Simple, readable code
Operations in parallel allows for increased performance with proper synchronization
Cons
The dangers of multi-threading
The creation of threads is quite cumbersome and resource intensive, thus should not be a frequent operation
No re-use of threads
Must manually limit threads
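To make the last con concrete: with raw threads, nothing stops a burst of connections from spawning an unbounded number of threads. One hand-rolled way to cap them (a sketch of my own, not from the article) is a counting semaphore that the dispatcher must acquire before spawning each handler:

```java
import java.util.concurrent.Semaphore;

public class LimitedThreads {
    public static void main(String[] args) throws InterruptedException {
        Semaphore permits = new Semaphore(4); // at most 4 handler threads alive at once

        for (int i = 0; i < 10; i++) {
            permits.acquire(); // dispatcher blocks here while 4 handlers are running
            new Thread(() -> {
                try {
                    // ... handle one client request here ...
                } finally {
                    permits.release(); // frees a slot for the next connection
                }
            }).start();
        }
        System.out.println("dispatched 10 tasks with at most 4 concurrent threads");
    }
}
```

This still pays the cost of creating a fresh thread per request, though, which is exactly what the thread pool in the next section avoids.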
Thread Pool
Let's investigate the sample code from this article, seen below.
while(! isStopped()){
Socket clientSocket = null;
try {
clientSocket = this.serverSocket.accept();
} catch (IOException e) {
if(isStopped()) {
System.out.println("Server Stopped.") ;
break;
}
throw new RuntimeException("Error accepting client connection", e);
}
this.threadPool.execute(new WorkerRunnable(clientSocket,"Thread Pooled Server"));
}
Note, I excluded the setup because it is rather similar to the Multi-Threaded example. Let's step through the logic in this example.
The server waits for a request to arrive on its allotted port
The server sends the request to a handler that is given to the ThreadPool to run
The ThreadPool receives the Runnable code, allocates a worker, and begins code execution in parallel
The loop repeats
The server again acts as a dispatcher; it listens for requests, receives one, and ships it to the ThreadPool. The ThreadPool abstracts the complex resource management away from the developer and executes the code efficiently. This is very similar to the multi-threaded example, but all resource management is packaged into the ThreadPool. The code is reduced even further from the example above, and it is much safer for developers who are not multi-threading professionals. Note that the WorkerRunnable is only a Runnable, not a raw Thread, whilst the ClientHandler in the Multi-Threaded example was a raw Thread.
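The excluded setup is essentially just the construction of the pool. A minimal sketch of the mechanics (the pool size and the stand-in task are my own choices, not from the article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    public static void main(String[] args) throws InterruptedException {
        // a fixed pool re-uses 8 threads for all submitted handlers
        ExecutorService threadPool = Executors.newFixedThreadPool(8);

        // in the accept loop you would call threadPool.execute(new WorkerRunnable(...));
        // here we just submit a stand-in Runnable to show the mechanics
        threadPool.execute(() -> System.out.println("request handled by pooled worker"));

        // orderly shutdown: finish queued tasks, then stop the workers
        threadPool.shutdown();
        threadPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Executors also offers other configurations, e.g. newCachedThreadPool() which grows and shrinks on demand, which is part of the learning curve mentioned below.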
Pros
Threads are managed and re-used by the pool
Further simplifies the code base
Inherits all benefits from the Multi-Threaded example
Cons
There is a learning curve to fully understanding pools and their different configurations
Notes
In Java, there exists another implementation called RMI that attempts to abstract away the network, allowing Client-Server communication to happen as though it were on one JVM, even when it spans many. Although this can also use multi-threading, it is another approach to the issue instead of sockets.
We created a Qt HTTP server derived from QTcpServer.
Each incoming client connection is handled in a new thread like this:
void WebClientThread::run()
{
// Configure the web client socket
m_socket = new QTcpSocket();
connect(m_socket, SIGNAL(disconnected()), this, SLOT(disconnected()));
connect(m_socket, SIGNAL (error(QAbstractSocket::SocketError)), this, SLOT(socketError(QAbstractSocket::SocketError)));
// Create the actual web client = worker
WebClient client(m_socket, m_configuration, m_pEventConnection, m_pThumbnailStreams, m_server, m_macAddress, 0 );
// Thread event loop
exec();
m_pLog->LOG(L_INFO, "Webclient thread finished");
}
//
// Client disconnect
//
void WebClientThread::disconnected()
{
m_socket->deleteLater();
exit(0);
}
This code works, but we experienced application crashes when it was executed while the NTP connection of our device kicked in and the system time was corrected from the epoch (01/01/1970) to the current time.
The crash could also be reproduced by running the application and meanwhile changing the system time from a script.
The application runs fine, even when the system time changes on the fly, with this modified version:
void WebClientThread::run()
{
// Configure the web client socket
m_socket = new QTcpSocket();
connect(m_socket, SIGNAL(disconnected()), this, SLOT(disconnected()));
connect(m_socket, SIGNAL (error(QAbstractSocket::SocketError)), this, SLOT(socketError(QAbstractSocket::SocketError)));
// Create the actual web client = worker
WebClient client(m_socket, m_configuration, m_pEventConnection, m_pThumbnailStreams, m_server, m_macAddress, 0 );
// Thread event loop
exec();
delete m_socket;
m_pLog->LOG(L_INFO, "Webclient thread finished");
}
//=======================================================================
//
// Client disconnect
//
void WebClientThread::disconnected()
{
exit(0);
}
Why would deleteLater() crash the application when the system time is shifted?
Additional information:
OS = embedded Linux 3.0.0. Qt = 4.8.
The socket is a connection between our Qt web server application and the front-end server, lighttpd. Could it be that lighttpd closes the socket when the system time shifts 47 years while the request is still being processed by our web server?
I could reproduce it by sending requests to the server while in parallel running a script that sets the date to 1980, 1990 and 2000, changing once a second.
This smells of wrong use of Qt threads. I suggest you do not subclass QThread if you call exec() from its run() method, because it's just too easy to do things the wrong way if you do that.
See for example https://wiki.qt.io/QThreads_general_usage to see how to set up a worker QObject for a QThread, but the gist of it is, create subclass of QObject and put your code there. Then move an instance of that to a QThread instance, and connect signals and slots to make things happen.
Another thing: you normally shouldn't use threads for Qt networking, like QTcpSocket. Qt is event-based and asynchronous, and as long as you just use signals and slots and never block in your slot methods, there is no need for threads; they only complicate things for no benefit. Only if you have time-consuming calculations, or if your program truly needs to utilize multiple CPU cores to achieve good enough performance, should you look into multithreading.
We use Pusher in our application in order to have real-time updates.
Something very strange happens: while Google Analytics says that we have around 200 simultaneous connections, Pusher says that we have 1500.
I would like to monitor Pusher connections in real time but could not find any method to do so. Can somebody help?
Currently there's no way to get realtime stats on the number of connections you currently have open for your app. However, it is something that we're investigating currently.
In terms of why the numbers vary between Pusher and Google Analytics, it's usually down to the fact that Google Analytics uses different methods of tracking whether or not a user is on the site. We're confident that our connection counting is correct, however, that's not to say that there isn't a potentially unexpected reason for your count to be high.
A connection is counted as a WebSocket connection to Pusher. When using the Pusher JavaScript library a new WebSocket connection is created when you create a new Pusher instance.
var pusher = new Pusher('APP_KEY');
Channel subscriptions are created over the existing WebSocket connection (known as multiplexing), and do not count towards your connection quota (there is no limit on the number allowed per connection).
var channel1 = pusher.subscribe('ch1');
var channel2 = pusher.subscribe('ch2');
// All done over a single connection
// more subscriptions
// ...
var channel100 = pusher.subscribe('ch100');
// Still just 1 connection
Common reasons why connections are higher than expected
Users open multiple tabs
If a user has multiple tabs open to the same application, multiple instances of Pusher will be created and therefore multiple connections will be used e.g. 2 tabs open will mean 2 connections are established.
Incorrectly coded applications
As mentioned above, a new connection is created every time a new Pusher object is instantiated. It is therefore possible to create many connections in the same page.
Using an older version of one of our libraries
Our connection strategies have improved over time, and we recommend that you keep up to date with the latest versions.
Specifically, in newer versions of our JS library, we carry out ping-pong requests between server and client to verify that the client is still around.
Other remedies
While our efforts are always to keep a connection open indefinitely, it is possible to disconnect manually if you feel this works in your scenario. It can be achieved by calling pusher.disconnect() on your Pusher instance. Below is some example code:
var pusher = new Pusher("APP_KEY");
var timeoutId = null;
function startInactivityCheck() {
timeoutId = window.setTimeout(function(){
pusher.disconnect();
}, 5 * 60 * 1000); // called after 5 minutes
};
// called by something that detects user activity
function userActivityDetected(){
if(timeoutId !== null) {
window.clearTimeout(timeoutId);
}
startInactivityCheck();
};
How this disconnection is transmitted to the user is up to you but you may consider prompting them to let them know that they will not receive any further real-time updates due to a long period of inactivity. If they wish to start receiving real-time updates again they should click a button.
My problem refers to the statistics that are displayed in the management plugin. When not in use, the RabbitMQ stats look like this:
I am using RabbitMQ to create a REQ/REP socket. For each connected client a new queue is created, so we have 4 queues now:
However I don't understand the other numbers.
Why are there 8 exchanges initially? (after fresh install)
Why are there 2 queues initially? (after fresh install)
Why did the other numbers jump from 0 to 4 while I have just 2 clients?
Is this because of the REQ/REP?
Update: I have two application communicating with each other. On the one side I have
var context = require('rabbit.js').createContext('amqp://localhost');
var rep = context.socket('REP', {
prefetch: 1,
persistent: false
});
rep.connect(someIdentifier);
rep.setEncoding('utf8');
rep.on('data', function(data) {
//got a request
});
And on the other:
var context = require('rabbit.js').createContext('amqp://localhost');
var req = context.socket('REQ');
req.setEncoding('utf8');
req.connect(sameIdAsAbove);
req.on('data', function(data) {
//got a response
});
The 6 default exchanges are one of each exchange type, plus their alias exchanges (see the Exchanges and Exchange Types section in AMQP 0-9-1 Model Explained).
The next 2 exchanges are amq.rabbitmq.trace (topic type), from the Firehose Tracer, and amq.rabbitmq.log (also topic type), from which you can consume log entries during debugging (just bind with the # key, for example).
These exchanges are created in every vhost, by the way. The amq prefix comes from the AMQP convention of naming AMQP-related entities with the amq prefix. The rabbitmq part stands for RabbitMQ-specific features.
So it's all about conventions.
As for the 2 default queues, it really depends on your installation type, since the default config may vary. A vanilla RabbitMQ installation gives you no queues.
If you have 4 active consumers (processes waiting for a new message to appear in a queue) that stay connected, they will each utilize at least one connection and one channel per connection.
Why your queue count changes is hard to say without seeing the actual code.
Update:
The 4 connections and 4 channels (to communicate with an AMQP broker you need to open at least one channel; this is described in section 4.3, Connection Multiplexing, of the AMQP protocol) come from the fact that the underlying implementation creates a duplex stream (one for each application instance), which probably uses two connections so that read and write events happen independently.
P.S.: actually, a fresh install may import a pre-defined config and configure many other options, from access policies, vhosts, users, exchanges, queues and bindings to HA, clustering and much more.
I am writing a web service which has to be able to reply to multiple HTTP requests.
From what I understand, I will need to deal with HttpListener.
What is the best method to receive an HTTP request (or better, multiple HTTP requests), translate it and send the results back to the caller? How safe is it to use HttpListeners on threads?
Thanks
You typically set up a main thread that accepts connections and passes each request to be handled by either a new thread or a free thread in a thread pool. I'd say you're on the right track, though.
You're looking for something similar to:
while (boolProcessRequests)
{
HttpListenerContext context = null;
// this line blocks until a new request arrives
context = listener.GetContext();
Thread T = new Thread((new YourRequestProcessorClass(context)).ExecuteRequest);
T.Start();
}
Edit: Detailed description. If you don't have access to a web server and need to roll your own web service, you would use the following structure:
One main thread that accepts connections/requests and, as soon as they arrive, passes the connection to a free thread to process. Sort of like the hostess at a restaurant who passes you to a waiter/waitress who will process your request.
In this case, the Hostess (main thread) has a loop:
- Wait at the door for new arrivals
- Find a free table and seat the patrons there and call the waiter to process the request.
- Go back to the door and wait.
In the code above, the requests are packaged inside the HttpListenerContext object. Once they arrive, the main thread creates a new thread and a new RequestProcessor class that is initialized with the request data (context). The RequestProcessor then uses the Response object inside the context object to respond to the request. Obviously you need to create YourRequestProcessorClass and a function like ExecuteRequest to be run by the thread.
I'm not sure what platform you're on, but you can see a .NET example for threading here and for HttpListener here.
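HttpListener is .NET-specific, but the same dispatcher-plus-pool shape exists in the JDK's built-in com.sun.net.httpserver package, where setExecutor() plays the role of the thread pool. A sketch for comparison (the path, pool size, and response body are arbitrary; the server issues one request against itself just to demonstrate the round trip):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.Executors;

public class PooledHttpServer {
    public static void main(String[] args) throws IOException {
        // bind to an ephemeral port; the accept loop is run internally
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);

        // handler == the "RequestProcessor": it answers each request
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        // requests are dispatched to this pool instead of new raw threads
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.start();

        // demonstrate one request against our own server
        int port = server.getAddress().getPort();
        URL url = new URL("http://localhost:" + port + "/");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            System.out.println(in.readLine()); // → ok
        }
        server.stop(0);
    }
}
```

The structure mirrors the answer above: the library owns the "hostess" accept loop, and the executor you supply is the pool of "waiters".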