Minecraft Server Query Protocol: what does the SessionId field identify?

I'm writing an app that uses this protocol, and I don't understand what the SessionId field identifies. I see two possibilities.
First, it identifies each individual request/response packet pair, so I should generate a new one for every request.
Or, it identifies the client querying the server's status, so I should generate it once for every server I connect to.
Any ideas?

Okay, I've got it. SessionId can be any Int32 value as long as SessionId & 0x0F0F0F0F == SessionId (i.e. the upper nibble of each byte is zero). What it identifies depends on what you want to do; either approach is correct.
For my project I generate it randomly for each packet I send.
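For reference, here is a minimal sketch (mine, not from the protocol docs) of generating a random SessionId that satisfies that constraint:

// Generate a random 32-bit session ID whose upper nibble of each byte is zero,
// so that sessionId & 0x0F0F0F0F === sessionId holds.
function generateSessionId() {
  return Math.floor(Math.random() * 0x100000000) & 0x0F0F0F0F;
}

const sessionId = generateSessionId(); // regenerate per packet, or reuse per server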

Related

Node.js socket.io: need for synchronization?

I'm currently working with node.js, using the socket.io library, to implement a simple chat application. In this application, for reasons not relevant here, I want to set up a system in which a client can ask the server for a piece of information. The server then broadcasts this request to all other online sockets, which respond with the answer if they have it. The server finally returns the first response it receives to the original client socket that made the request.
Naturally, the server might receive multiple responses while only one is needed, so as soon as one has been received, the others should be discarded. However, it feels like I should use some kind of synchronized data structure/code to make sure this check for "has an answer already been received" works as intended.
I've done some searching on this subject and I've seen several mentions of node.js using an event-driven model and not requiring any synchronized code/data structures, as there are no multiple threads. Is this true? Would my scenario not require any special attention to synchronization and would it just work? Or would I need to use synchronization methods, and if so, which ones?
Code example:
socket.on('new_response', async data => {
await processResponse(data)
});
Because I'm working with encryption I have to use async/await, which further complicates things. The processResponse function checks whether a response has already been received; if not, it processes it, otherwise it ignores it.
I would suggest something as simple as including a uniqueID in each broadcast to the clients asking whether they have a piece of information. The clients then include that same uniqueID in any response they send.
With that, your server can receive answers from the clients and just keep track of which uniqueID values it has already received an answer for; if an answer has already been received for a uniqueID, it simply ignores the later clients that respond.
The uniqueID is generated server-side, so it can literally just be an increasing number. You can store the IDs you've already received a response for in a server-side Set object so you can quickly check whether an answer has already arrived.
Then, the only thing left to do is to age these uniqueIDs out of the Set at some point so they don't accumulate forever. A simple way to do that is to replace the Set object with a fresh one every 15 minutes or so, keeping one older generation around so you can check both.
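A rough sketch of that idea, assuming a socket.io server instance named io and made-up event names (only new_response and processResponse come from the question):

let nextRequestId = 0;
let answered = new Set();     // request IDs that already got an answer
let answeredOld = new Set();  // previous generation, kept around for lookups

// Swap generations every 15 minutes so old IDs don't accumulate forever.
setInterval(() => {
  answeredOld = answered;
  answered = new Set();
}, 15 * 60 * 1000);

io.on('connection', socket => {
  socket.on('ask_for_info', query => {
    const requestId = ++nextRequestId;
    // Broadcast the question to every other connected socket.
    socket.broadcast.emit('info_request', { requestId, query, from: socket.id });
  });

  socket.on('new_response', async data => {
    // First answer wins; ignore any later answer for the same requestId.
    if (answered.has(data.requestId) || answeredOld.has(data.requestId)) return;
    answered.add(data.requestId);
    await processResponse(data); // decrypt and forward to the original requester
  });
});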

How to persist HTTP response in redis

I am creating a long-polling chat application in Node.js, without using Socket.io, and scaling it using clusters.
I have to find a way to store all the long-polled HTTP request and response objects in such a way that they are available across all node clusters (so that when a message is received for a long-polled request, I can get that request and respond to it).
I have tried using redis; however, when I stringify the HTTP request and response objects, I get a "Cannot Stringify Cyclic Structure" error.
Maybe I am approaching it in the wrong way. In that case, how do we generally implement long-polling across different clusters?
What you're asking seems to be a bit confused.
In a long-polling situation, a client makes an HTTP request that is routed to a specific HTTP server. If no data to satisfy that request is immediately available, the request is kept alive for some extended period of time; either it eventually times out and the client issues another long-polling request, or some data becomes available and a response is returned.
As such, you do not make this work across a cluster by trying to centrally save request and response objects. Those belong to a specific TCP connection between a specific server and a specific client. You can't save them and use them elsewhere, and doing so wouldn't help anything work with clustering either.
The clustering problem you actually have is this: when some data does become available for a specific client, you need to know which server is currently holding that client's live long-polling request, so you can instruct that specific server to return the data on it.
The usual way you do this is with some sort of userID that represents each client. When a client connects with a long-polling request, that connection is distributed by the cluster to one of your servers. The server that gets the request then writes to a central store (often redis) that userID userA is now connected to server12. Then, when some data becomes available for userA, any agent can look that user up in the redis store, see that the user is currently connected to server12, and instruct server12 to send the data to userA over userA's current long-polling connection.
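As a rough illustration of that mapping, assuming the ioredis client and made-up key names (none of this is from the original answer):

const Redis = require('ioredis');
const redis = new Redis();

const SERVER_ID = process.env.SERVER_ID; // e.g. "server12"

// When a long-polling request arrives on this server, record where the user is.
async function registerLongPoll(userId) {
  // Short expiry so stale mappings disappear if this server dies.
  await redis.set(`longpoll:${userId}`, SERVER_ID, 'EX', 60);
}

// When data becomes available for a user, look up which server holds the request.
async function findServerFor(userId) {
  return redis.get(`longpoll:${userId}`); // e.g. "server12", or null if not connected
}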
This is just one strategy for dealing with clustering - there are many others such as sticky load balancing, algorithmic distribution, broadcast distribution, etc... You can see an answer that describes some of the various schemes here.
If you are sure you want to store all the request and responses, have a look at this question.
Serializing cyclic objects: you can also try cycle.js.
However, I think you would only be interested in serializing a few elements from the request/response. An easier (and probably better) approach would be to just copy the required key/value pairs from the request/response object into a separate object and store that.
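A minimal sketch of that copy-the-fields approach (the chosen fields are just examples):

// Copy only the plain, serializable fields you actually need from the request.
function extractRequestInfo(req) {
  return {
    method: req.method,
    url: req.url,
    headers: req.headers,
  };
}

// Inside a request handler, a plain object like this stringifies cleanly for redis:
// const payload = JSON.stringify(extractRequestInfo(req));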

Can you rely on socket.id as a UID

Excuse my ignorance, day 2 of node.js/socket.io.
I'm looking for a way to uniquely identify users for use in a database queuing system. I've read a lot about using Express's session cookie; however, I've noticed socket.id seems to be a UID that socket.io is already using.
Therefore I have been using socket.id to identify my users both in the database and in creating private "rooms" to communicate with just them.
Is this a terrible idea?
A socket ID is just that: it uniquely identifies a socket. It doesn't uniquely identify a user, and it's definitely not intended for that purpose. A single user (in many applications) might have multiple connections, and therefore multiple sockets with different IDs. Also, every time they connect they will be assigned a new ID.
So you obviously shouldn't use a socket.id as a user ID. Mustafa points out that you could reassign socket.id to a user ID, but I tend to think that's a very bad idea for two reasons:
socket.id is supposed to uniquely identify a socket, so you would run into problems when a single user has multiple sockets open.
Socket.IO uses that ID internally a lot for storing things in hashtables, and if you change it, you might get unexpected results and hard-to-track-down bugs. I haven't tested it, but looking at the Socket.IO source, that's what I would expect.
It's better to generate IDs using another method and then associate the user with a socket (for example, during the handshake, using data from the cookie).
socket.set(key, value, callback) is the method explicitly intended to be used for associating your own data (like a user ID) with a socket connection, and is the only one guaranteed to be safe.
When socket.io sockets are created, you can add whatever variables you wish to the socket object; socket.userid = getUserID() will work fine. It is better to assign UIDs in the database and add them to the socket objects once authentication succeeds.
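A rough sketch of that, using a socket.io middleware and a hypothetical lookUpUserFromCookie helper:

// Attach your own, database-backed user ID to each socket at connection time.
io.use(async (socket, next) => {
  try {
    const cookie = socket.handshake.headers.cookie;
    const user = await lookUpUserFromCookie(cookie); // hypothetical auth helper
    socket.userId = user.id; // stable ID from your database, not socket.id
    next();
  } catch (err) {
    next(new Error('authentication failed'));
  }
});

io.on('connection', socket => {
  // socket.id changes on every reconnect; socket.userId stays the same user.
  socket.join(`user:${socket.userId}`); // private room keyed by the user ID, not the socket
});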

Calculating the Ping of a WebSocket Connection?

Small question: how can I calculate the ping of a WebSocket connection?
The server is set up using Node.js and node-websocket-server, if that matters at all.
There are a few ways. The one offered by Raynos is wrong, because client time and server time are different and you cannot compare them directly.
The solution of sending a timestamp is good, but it has one issue: if the server logic makes decisions or calculations based on ping, then sending a timestamp carries the risk that the client software, or a man-in-the-middle, will modify it and feed the server different results.
A much better way is to send the client a packet with a unique ID that is not an incrementing number but randomized, and have the server expect a "PONG" message from the client carrying that ID.
The ID size should be fixed; I recommend 32 bits (an int).
That way the server sends a "PING" with a unique ID and stores the timestamp of the moment the message was sent, then waits until it receives a "PONG" with the same ID from the client, and calculates the round-trip latency from the stored timestamp and the time the "PONG" arrived.
Don't forget to implement a timeout so that a lost PING/PONG packet doesn't stall the latency check.
WebSockets also have a dedicated PING opcode, which the example above does not use. The official document describing this opcode may be helpful if you are implementing your own WebSocket protocol handling on the server side: https://www.rfc-editor.org/rfc/rfc6455#page-37
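A minimal sketch of that scheme on the server side (the JSON message format is my assumption, not from the original answer):

const crypto = require('crypto');

const pending = new Map(); // pingId -> timestamp when the PING was sent

function sendPing(socket) {
  const pingId = crypto.randomBytes(4).readUInt32BE(0); // random 32-bit ID
  pending.set(pingId, Date.now());
  socket.send(JSON.stringify({ type: 'PING', id: pingId }));

  // Timeout: give up on this ping if no PONG arrives in time.
  setTimeout(() => pending.delete(pingId), 10 * 1000);
}

function handleMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'PONG' && pending.has(msg.id)) {
    const latency = Date.now() - pending.get(msg.id); // round-trip time in ms
    pending.delete(msg.id);
    console.log(`latency: ${latency} ms`);
  }
}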
To calculate the latency you really should complete the round trip. You should have a ping message that carries a timestamp. When one side receives a ping, it should change it to a pong (or gnip or whatever) but keep the original timestamp and send it back to the sender. The original sender can then compare the timestamp to the current time to see what the round-trip latency is. If you need the one-way latency, divide by 2. The reason you need to do it this way is that without some very sophisticated time-skew algorithms, the time on one host vs. another is not going to be comparable at small time deltas like this.
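A short sketch of that echo-the-timestamp variant (message names are made up; assumes a ws-style socket that emits 'message' events):

socket.on('message', raw => {
  const msg = JSON.parse(raw);
  if (msg.type === 'ping') {
    // Echo the sender's timestamp back untouched.
    socket.send(JSON.stringify({ type: 'pong', sentAt: msg.sentAt }));
  } else if (msg.type === 'pong') {
    // Only our own clock is involved, so clock skew doesn't matter.
    const roundTrip = Date.now() - msg.sentAt;
    console.log(`round-trip latency: ${roundTrip} ms`);
  }
});

// Sender side: socket.send(JSON.stringify({ type: 'ping', sentAt: Date.now() }));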
Websockets have a ping type message which the server can respond to with a pong type message. See this for more info about websockets.
You can send a request over the web socket with Date.now() as data and compare it to Date.now() on the server.
This gives you the time difference between sending the packet and receiving it plus any handling time on either end.

Need ideas for securing a JMS based server process and database

I have a tool that is distributed freely as an Eclipse plugin, which means that I can't track who uses it or ask them to register.
Every client tool communicates via a JMS broker with a single shared server process (written in Java) and can receive messages in reply. The server connects via Hibernate to a MySQL database.
At present, the only message the tool sends is a request for data; the server gets the message and sends a block of XML data representing elements to the client, which displays the corresponding items in the IDE. Hence, I don't think there is much that can be done to the server except a DoS attack.
Now, however, I want to add the following functionality: a user can assign a rating to a particular element (identified by a numeric id), and a message will be sent to the server, which will store the rating as an event in a rating-event table. When the next requests for data come in, the average rating for each item will be sent back with the reply.
My problem is that I've never deployed a tool that uses a public server like this, even one hidden behind the JMS broker. What attacks could be mounted against me, and how can I defend against them?
There's the problem of DoS, and I'm not sure how to address it.
There's the possibility of injection, but all my data is numeric and I don't know how Hibernate deals with that.
There's the problem of spam or dummy voting, and I can't really think of how to address that.
I'm sure there are others...
With regard to the dummy voting, this is not secure (i.e. it wouldn't be acceptable for electoral purposes!) but it is a simple mechanism:
Create a GUID on the server, store it in an appropriate table, and send it to the client. When the client votes, it sends back the GUID, which is compared against the database. If the GUID is valid, accept the vote and remove the stored GUID.
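A sketch of that one-time-token flow, in JavaScript purely for illustration (the real server is Java/Hibernate); the Map stands in for the database table and all names are made up:

const crypto = require('crypto');

const issuedTokens = new Map(); // token -> elementId it was issued for

// When sending element data to a client, issue a one-time voting token for it.
function issueVotingToken(elementId) {
  const token = crypto.randomUUID();
  issuedTokens.set(token, elementId);
  return token; // include this in the reply message to the client
}

// When a vote message arrives, accept it only if the token is valid and unused.
function handleVote(token, elementId, rating) {
  if (issuedTokens.get(token) !== elementId) return false; // unknown or already-used token
  issuedTokens.delete(token); // one vote per token
  storeRatingEvent(elementId, rating); // hypothetical persistence call
  return true;
}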
