I have a question about a situation I am currently in. I have a solution for it, but I am not quite sure it solves the issue 100%, as I do not have tests written that could validate it.
I would love your opinion on the matter, and maybe a suggestion of a more elegant solution, or possibly even a way to avoid the issue completely.
Here it is:
I am making a game where you may create or join open rooms/games.
There is a game list in the UI, and when you click a game you attempt to join it.
Each game has a bet (the amount of credit that you win or lose) that the creator set and which anyone joining must match.
On the server side, before I let the player actually join the room, I must validate that his credit balance is sufficient to match the bet of the game he is joining. This is done via an API call.
Now, suppose two players join the game at once, and the validation of the first player takes 3 seconds but the validation of the second only 1 second.
Since rooms are 1 vs 1 I must not let a player join if someone else already did.
I can do this simply by checking if there's a player in the game already:
// game already full
if (game.p2) {
  return socket.emit("join_game_reply", {
    err: "Someone else already joined."
  })
}
But the issue at hand is that after that check, I must validate the balance.
So we get something like this:
socket.on("join_game", data => {
  const game = openGames[data.gameId]
  // game already full
  if (game.p2) {
    return socket.emit("join_game_reply", {
      err: "Someone else already joined."
    })
  }
  // check if user's balance is sufficient to match bet of room creator
  verifyUserBalance(socket.player, game.bet)
    .then(sufficient => {
      if (sufficient) {
        // join game
        game.p2 = socket.player
      }
    })
})
The issue here:
What if, at the time playerX clicks join, the game is open and validation starts, but while playerX is being validated, playerY joins, finishes validation first, and is therefore set as game.p2? Validation of playerX finishes shortly after, and the server then goes on to set game.p2 to playerX, leaving playerY with a UI state of being in the game even though on the server he no longer is.
The solution I have is to literally just do the check again after validation:
socket.on("join_game", data => {
  const game = openGames[data.gameId]
  // game already full
  if (game.p2) {
    return socket.emit("join_game_reply", {
      err: "Someone else already joined."
    })
  }
  // check if user's balance is sufficient to match bet of room creator
  verifyUserBalance(socket.player, game.bet)
    .then(sufficient => {
      if (sufficient) {
        // re-check: someone may have joined while the balance was being verified
        if (game.p2) {
          return socket.emit("join_game_reply", {
            err: "Someone else already joined."
          })
        }
        // join game
        game.p2 = socket.player
      }
    })
})
The reason I think this works is that Node.js is single-threaded, so after validating I can make sure I only let a player join if no one else joined in the meantime.
After writing this up I actually feel pretty confident that it will work so please let me in on my mistakes if you see any! Thanks a lot for taking the time!
Your code will work, but I think this is a short-term bootstrap and you will have to change it in the mid-term.
It will work:
A. if you have only one server,
B. if your server does not crash,
C. if you have just one synchronous action (here game.p2 = socket.player).
A. Multiple servers
If you scale up your infrastructure, I'm afraid it won't work.
You should not use Node.js variables (such as openGames) to store data, but retrieve it from a cache database (such as Redis). This Redis database will be your single source of truth.
B. If the server crashes
The same kind of problem will happen if your server crashes (for any reason, like a full disk...). You will lose all the data stored in Node.js variables.
C. Multiple actions
If you want to add another action (like putting the bet amount in escrow) to your workflow, you will need to catch the failure of this action (and the failure of the room joining) and guarantee an all-or-nothing mechanism (escrow + joining, or nothing).
You can manage it in your code, but it will become quite complex.
Transactions
When dealing with money + actions, I think you should use the transaction features of databases. I would use, for example, Redis transactions.
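To make the idea concrete, here is a rough sketch of what an atomic "check and join" could look like with Redis transactions (WATCH/MULTI/EXEC). It assumes the ioredis client and a hypothetical game:<gameId> hash for the room state; verifyUserBalance() is the function from the question, and player.id is assumed to exist:

// Rough sketch only, not a drop-in implementation.
const Redis = require("ioredis")
const redis = new Redis()

async function tryJoinGame(gameId, player, bet) {
  const key = `game:${gameId}`

  // WATCH makes the MULTI/EXEC below fail if anyone else writes
  // this key between here and exec().
  await redis.watch(key)

  if (await redis.hget(key, "p2")) {
    await redis.unwatch()
    return { err: "Someone else already joined." }
  }

  const sufficient = await verifyUserBalance(player, bet)
  if (!sufficient) {
    await redis.unwatch()
    return { err: "Insufficient balance." }
  }

  // exec() resolves to null if the watched key changed, i.e. another
  // player grabbed the seat while the balance was being verified.
  const result = await redis.multi().hset(key, "p2", player.id).exec()
  if (result === null) {
    return { err: "Someone else already joined." }
  }
  return { ok: true }
}

Note that WATCH is tied to the connection it was issued on, so in a real setup you would probably want a dedicated connection (or a Lua script) per join attempt rather than one shared client.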
You need to make "verify and join" an atomic operation on the server so nobody else can cause a race condition. There are many different ways to approach a solution. The best solution would only affect that particular game, without impacting the processing of joins for other games.
Here's one idea:
1. Create a means of "provisionally joining a game". This will essentially reserve your spot in the game while you then check to see if the user verifies for the game. This prevents anyone else from joining a game you were first at and are in the process of verifying.
2. When someone else comes in to provisionally join a game, but the game already has a provisional user, the join function can return a promise that is not yet resolved. If the previous provisional join verifies and finishes, then this promise will reject because the game is already full. If the other provisional join fails to verify, then the first one to request a provisional join will resolve and it can then go on with the verification process.
3. If this second user verifies correctly and converts to a completed join to the game, it will reject any other waiting promises for other provisional joins to this game. If it fails to verify, then it goes back to step 2 and the next one waiting gets a chance.
In this way, each game essentially has a queue of users waiting to get into the game. The queue hangs around as long as the game isn't full of verified users so whenever anyone doesn't verify, the next one in the queue gets a shot at joining.
For performance and user-experience reasons, you may want to implement a timeout on waiting in the queue and you may want to limit how many users can be in the queue (probably no point in allowing 100 users to be in the queue since it's unlikely they will all fail to verify).
It's important to understand that the verify and join needs to be all implemented on the server because that's the only way you can assure the integrity of the process and control it to avoid race conditions.
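To make that more concrete, here is a minimal sketch of the provisional-join queue in plain Node.js. It assumes a single server process; verifyUserBalance() is the function from the question and all other names are invented for the example:

// Sketch only: one queue per game; a single candidate at a time holds the
// provisional slot while verifyUserBalance() runs, everyone else waits.
function createGame(bet) {
  return { bet, p2: null, queue: [], verifying: false }
}

function provisionallyJoin(game, player) {
  return new Promise((resolve, reject) => {
    game.queue.push({ player, resolve, reject })
    processQueue(game)
  })
}

async function processQueue(game) {
  if (game.verifying) return            // someone is already being verified
  const next = game.queue.shift()
  if (!next) return

  if (game.p2) {                        // game filled while this player waited
    next.reject(new Error("Someone else already joined."))
    return processQueue(game)
  }

  game.verifying = true
  try {
    const sufficient = await verifyUserBalance(next.player, game.bet)
    if (sufficient) {
      game.p2 = next.player
      next.resolve(game)
      // the game is now full, so reject everyone still waiting
      const waiting = game.queue.splice(0)
      waiting.forEach(w => w.reject(new Error("Someone else already joined.")))
    } else {
      next.reject(new Error("Insufficient balance."))
    }
  } catch (err) {
    next.reject(err)                    // the balance check itself failed
  } finally {
    game.verifying = false
    processQueue(game)                  // give the next waiter a chance
  }
}

The join_game handler would then just call provisionallyJoin(game, socket.player) and emit the success or error reply when the returned promise settles.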
I am trying to access pixel data and save images from an in-game camera to disk. Initially, the simple approach was to use a render target and subsequently RenderTarget->ReadPixels(), but as the native implementation of ReadPixels() contains a call to FlushRenderingCommands(), it would block the game thread until the image is saved. Being a computationally intensive operation, this was lowering my FPS way too much.
To solve this problem, I am trying to create a dedicated thread that can access the camera as a CaptureComponent, and then follow a similar approach. But as FlushRenderingCommands() can only be called from the game thread, I had to rewrite ReadPixels() without that call (in a non-blocking way of sorts, inspired by the tutorial at https://wiki.unrealengine.com/Render_Target_Lookup). Even then, I am facing a problem with my in-game FPS being jerky whenever an image is saved (I confirmed this is not because of the actual saving-to-disk operation, but because of the pixel data access). My rewritten ReadPixels() function looks as below; I was hoping to get some suggestions as to what could be going wrong here. I am not sure if ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER can be called from a non-game thread, and whether that's part of my problem.
APIPCamera* cam = GameThread->CameraDirector->getCamera(0);
USceneCaptureComponent2D* capture = cam->getCaptureComponent(EPIPCameraType::PIP_CAMERA_TYPE_SCENE, true);
if (capture != nullptr) {
    if (capture->TextureTarget != nullptr) {
        FTextureRenderTargetResource* RenderResource = capture->TextureTarget->GetRenderTargetResource();
        if (RenderResource != nullptr) {
            width = capture->TextureTarget->GetSurfaceWidth();
            height = capture->TextureTarget->GetSurfaceHeight();
            // Read the render target surface data back.
            struct FReadSurfaceContext
            {
                FRenderTarget* SrcRenderTarget;
                TArray<FColor>* OutData;
                FIntRect Rect;
                FReadSurfaceDataFlags Flags;
            };
            bmp.Reset();
            FReadSurfaceContext ReadSurfaceContext =
            {
                RenderResource,
                &bmp,
                FIntRect(0, 0, RenderResource->GetSizeXY().X, RenderResource->GetSizeXY().Y),
                FReadSurfaceDataFlags(RCM_UNorm, CubeFace_MAX)
            };
            ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
                ReadSurfaceCommand,
                FReadSurfaceContext, Context, ReadSurfaceContext,
                {
                    RHICmdList.ReadSurfaceData(
                        Context.SrcRenderTarget->GetRenderTargetTexture(),
                        Context.Rect,
                        *Context.OutData,
                        Context.Flags
                    );
                });
        }
    }
}
EDIT: One more thing I have noticed is that the stuttering goes away if I disable HDR in my render target settings (but this results in low-quality images), so it seems plausible that the size of the image is still blocking one of the core threads because of the way I am implementing it.
It should be possible to call ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER from any thread, since there is an underlying call to the Task Graph. You can see it when you analyze the code this macro generates:
if (ShouldExecuteOnRenderThread())
{
    CheckNotBlockedOnRenderThread();
    TGraphTask<EURCMacro_##TypeName>::CreateTask().ConstructAndDispatchWhenReady(ParamValue1);
}
You should be cautious about accessing UObjects (like USceneCaptureComponent2D) from different threads, because these are managed by the garbage collector and owned by the game thread.
(...) but even then I am facing a problem with my in-game FPS being jerky whenever an image is saved
Did you check which thread is causing the FPS drop with the stat unit or stat unitgraph command? You could also use the profiling tools to get more detailed insight and make sure there are no other causes of the lag.
Edit:
I've found yet another method of accessing pixel data. Try it without actually copying the data in the for loop and check if there is any improvement in FPS. This could be a bit faster because there is no pixel manipulation/conversion in between.
In what areas of programming would I use state machines? Why? How could I implement one?
EDIT: Please provide a practical example, if it's not too much to ask.
In what areas of programming would I use a state machine?
Use a state machine to represent a (real or logical) object that can exist in a limited number of conditions ("states") and progresses from one state to the next according to a fixed set of rules.
Why would I use a state machine?
A state machine is often a very compact way to represent a set of complex rules and conditions, and to process various inputs. You'll see state machines in embedded devices that have limited memory. Implemented well, a state machine is self-documenting because each logical state represents a physical condition. A state machine can be embodied in a tiny amount of code in comparison to its procedural equivalent and runs extremely efficiently. Moreover, the rules that govern state changes can often be stored as data in a table, providing a compact representation that can be easily maintained.
How can I implement one?
Trivial example:
typedef enum states {     // Define the states in the state machine.
    NO_PIZZA,             // Exit state machine.
    COUNT_PEOPLE,         // Ask user for # of people.
    COUNT_SLICES,         // Ask user for # slices.
    SERVE_PIZZA,          // Validate and serve.
    EAT_PIZZA             // Task is complete.
} STATE;

STATE state = COUNT_PEOPLE;
int nPeople, nSlices, nSlicesPerPerson;

// Serve slices of pizza to people, so that each person gets
// the same number of slices.
while (state != NO_PIZZA) {
    switch (state) {
    case COUNT_PEOPLE:
        if (promptForPeople(&nPeople)) // If input is valid..
            state = COUNT_SLICES;      // .. go to next state..
        break;                         // .. else remain in this state.
    case COUNT_SLICES:
        if (promptForSlices(&nSlices))
            state = SERVE_PIZZA;
        break;
    case SERVE_PIZZA:
        if (nSlices % nPeople != 0)    // Can't divide the pizza evenly.
        {
            getMorePizzaOrFriends();   // Do something about it.
            state = COUNT_PEOPLE;      // Start over.
        }
        else
        {
            nSlicesPerPerson = nSlices/nPeople;
            state = EAT_PIZZA;
        }
        break;
    case EAT_PIZZA:
        // etc...
        state = NO_PIZZA;              // Exit the state machine.
        break;
    } // switch
} // while
Notes:
The example uses a switch() with explicit case/break states for simplicity. In practice, a case will often "fall through" to the next state.
For ease of maintaining a large state machine, the work done in each case can be encapsulated in a "worker" function. Get any input at the top of the while(), pass it to the worker function, and check the return value of the worker to compute the next state.
For compactness, the entire switch() can be replaced with an array of function pointers. Each state is embodied by a function whose return value is a pointer to the next state. Warning: This can either simplify the state machine or render it totally unmaintainable, so consider the implementation carefully! (A sketch of this table-driven style follows these notes.)
An embedded device may be implemented as a state machine that exits only on a catastrophic error, after which it performs a hard reset and re-enters the state machine.
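To illustrate the table-driven variant mentioned in the notes, here is a sketch in JavaScript rather than C (a C version would index an array of function pointers by the state enum). promptForPeople() and friends are the same hypothetical helpers as in the example above, adapted to take a context object:

// Sketch only: the pizza machine again, driven by a lookup table of
// state handlers instead of a switch. Each handler returns the next state.
const handlers = {
  COUNT_PEOPLE: ctx => promptForPeople(ctx) ? "COUNT_SLICES" : "COUNT_PEOPLE",
  COUNT_SLICES: ctx => promptForSlices(ctx) ? "SERVE_PIZZA" : "COUNT_SLICES",
  SERVE_PIZZA: ctx => {
    if (ctx.nSlices % ctx.nPeople !== 0) {    // can't divide the pizza evenly
      getMorePizzaOrFriends(ctx)
      return "COUNT_PEOPLE"                   // start over
    }
    ctx.nSlicesPerPerson = ctx.nSlices / ctx.nPeople
    return "EAT_PIZZA"
  },
  EAT_PIZZA: () => "NO_PIZZA"                 // exit the state machine
}

const ctx = {}
let state = "COUNT_PEOPLE"
while (state !== "NO_PIZZA") {
  state = handlers[state](ctx)                // the table drives the machine
}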
Some great answers already. For a slightly different perspective, consider searching for a text in a larger string. Someone has already mentioned regular expressions, and this is really just a special case, albeit an important one.
Consider the following method call:
very_long_text = "Bereshit bara Elohim et hashamayim ve'et ha'arets." …
word = "Elohim"
position = find_in_string(very_long_text, word)
How would you implement find_in_string? The easy approach would use a nested loop, something like this:
for i in 0 … length(very_long_text) - length(word):
    found = true
    for j in 0 … length(word):
        if (very_long_text[i + j] != word[j]):
            found = false
            break
    if found: return i
return -1
Apart from the fact that this is inefficient, it forms a state machine! The states here are somewhat hidden; let me rewrite the code slightly to make them more visible:
state = 0
for i in 0 … length(very_long_text) - length(word):
    if very_long_text[i] == word[state]:
        state += 1
        if state == length(word): return i
    else:
        state = 0
return -1
The different states here directly represent all different positions in the word we search for. There are two transitions for each node in the graph: if the letters match, go to the next state; for every other input (i.e. every other letter at the current position), go back to zero.
This slight reformulation has a huge advantage: it can now be tweaked to yield better performance using some basic techniques. In fact, every advanced string searching algorithm (discounting index data structures for the moment) builds on top of this state machine and improves some aspects of it.
What sort of task?
Any task, but from what I have seen, parsing of any sort is frequently implemented as a state machine.
Why?
Parsing a grammar is generally not a straightforward task. During the design phase it is fairly common that a state diagram is drawn to test the parsing algorithm. Translating that to a state machine implementation is a fairly simple task.
How?
Well, you are limited only by your imagination.
I have seen it done with case statements and loops.
I have seen it done with labels and goto statements
I have even seen it done with structures of function pointers which represent the current state. When the state changes, one or more function pointers are updated.
I have seen it done in code only, where a change of state simply means that you are running in a different section of code (no state variables, and redundant code where necessary). This can be demonstrated with a very simple sort, which is useful only for very small sets of data.
int a[10] = {some unsorted integers};
int z;

not_sorted_state:;
z = -1;
while (z < (int)(sizeof(a) / sizeof(a[0])) - 2)
{
    z = z + 1;
    if (a[z] > a[z + 1])
    {
        // ASSERT The array is not in order
        swap(a[z], a[z + 1]); // make the array more sorted
        goto not_sorted_state; // change state to sort the array
    }
}
// ASSERT the array is in order
There are no state variables, but the code itself represents the state.
The State design pattern is an object-oriented way to represent the state of an object by means of a finite state machine. It usually helps to reduce the logical complexity of that object's implementation (nested if's, many flags, etc.)
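A minimal sketch of that pattern in JavaScript (the Draft/Published document example is invented purely for illustration):

// Each state object encapsulates the behavior that is valid in that state,
// so the document itself needs no flags or nested ifs.
const Draft = {
  edit(doc, text) { doc.text = text },
  publish(doc) { doc.state = Published },   // transition to the next state
}
const Published = {
  edit() { throw new Error("Cannot edit a published document") },
  publish() { /* already published, nothing to do */ },
}

const doc = { text: "", state: Draft }
doc.state.edit(doc, "hello")   // delegated to Draft
doc.state.publish(doc)         // doc is now in the Published state
doc.state.edit(doc, "again")   // now throws, per the Published state's rules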
Most workflows can be implemented as state machines. For example, processing leave applications or orders.
If you're using .NET, try Windows Workflow Foundation. You can implement a state machine workflow quite quickly with it.
If you're using C#, any time you write an iterator block you're asking the compiler to build a state machine for you (keeping track of where you are in the iterator etc).
Here is a tested and working example of a state machine. Say you are on a serial stream (a serial port, TCP/IP data, or a file are typical examples). In this case I am looking for a specific packet structure that can be broken into three parts: sync, length, and payload. I have three states: the first is idle, waiting for the sync; the second is that we have a good sync and the next byte should be the length; and the third is accumulating the payload.
The example is purely serial with only one buffer. As written here it will recover from a bad byte or packet, possibly discarding a packet but eventually recovering. You could do other things, like a sliding window, to allow for immediate recovery. That would be for the case where you have, say, a partial packet that is cut short and then a new complete packet starts; the code below won't detect this and will throw away the partial as well as the whole packet, recovering on the next one. A sliding window would save you there if you really needed to process all the whole packets.
I use this kind of state machine all the time, be it for serial data streams, TCP/IP, or file I/O. Or perhaps for TCP/IP protocols themselves: say you want to send an email; you open the port, wait for the server to send a response, send HELO, wait for the server to send a packet, send a packet, wait for the reply, etc. Essentially, in that case as well as in the case below, you may be idling, waiting for the next byte/packet to come in. To remember what you were waiting for, and also to re-use the code that waits for something, you can use state variables. This is the same way state machines are used in logic (waiting for the next clock: what was I waiting for?).
Just like in logic, you may want to do something different for each state; in this case, if I have a good sync pattern I reset the offset into my storage as well as the checksum accumulator. The packet length state demonstrates a case where you may want to abort out of the normal control path. Not all state machines work this way; in fact, many may jump around or loop within the normal path, but the one below is pretty much linear.
I hope this is useful and wish that state machines were used more in software.
The test data has intentional problems with it that the state machine recovers from. There is some garbage data after the first good packet, a packet with a bad checksum, and a packet with an invalid length. My output was:
good packet:FA0712345678EB
Invalid sync pattern 0x12
Invalid sync pattern 0x34
Invalid sync pattern 0x56
Checksum error 0xBF
Invalid packet length 0
Invalid sync pattern 0x12
Invalid sync pattern 0x34
Invalid sync pattern 0x56
Invalid sync pattern 0x78
Invalid sync pattern 0xEB
good packet:FA081234567800EA
no more test data
The two good packets in the stream were extracted despite the bad data. And the bad data was detected and dealt with.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

unsigned char testdata[] =
{
    0xFA,0x07,0x12,0x34,0x56,0x78,0xEB,
    0x12,0x34,0x56,
    0xFA,0x07,0x12,0x34,0x56,0x78,0xAA,
    0xFA,0x00,0x12,0x34,0x56,0x78,0xEB,
    0xFA,0x08,0x12,0x34,0x56,0x78,0x00,0xEA
};
unsigned int testoff=0;

//packet structure
// [0] packet header 0xFA
// [1] bytes in packet (n)
// [2] payload
// ... payload
// [n-1] checksum
//
unsigned int state;
unsigned int packlen;
unsigned int packoff;
unsigned char packet[256];
unsigned int checksum;

int process_packet( unsigned char *data, unsigned int len )
{
    unsigned int ra;
    printf("good packet:");
    for(ra=0;ra<len;ra++) printf("%02X",data[ra]);
    printf("\n");
}

int getbyte ( unsigned char *d )
{
    //check peripheral for a new byte
    //or serialize a packet or file
    if(testoff<sizeof(testdata))
    {
        *d=testdata[testoff++];
        return(1);
    }
    else
    {
        printf("no more test data\n");
        exit(0);
    }
    return(0);
}

int main ( void )
{
    unsigned char b;
    state=0; //idle
    while(1)
    {
        if(getbyte(&b))
        {
            switch(state)
            {
                case 0: //idle
                    if(b!=0xFA)
                    {
                        printf("Invalid sync pattern 0x%02X\n",b);
                        break;
                    }
                    packoff=0;
                    checksum=b;
                    packet[packoff++]=b;
                    state++;
                    break;
                case 1: //packet length
                    checksum+=b;
                    packet[packoff++]=b;
                    packlen=b;
                    if(packlen<3)
                    {
                        printf("Invalid packet length %u\n",packlen);
                        state=0;
                        break;
                    }
                    state++;
                    break;
                case 2: //payload
                    checksum+=b;
                    packet[packoff++]=b;
                    if(packoff>=packlen)
                    {
                        state=0;
                        checksum=checksum&0xFF;
                        if(checksum)
                        {
                            printf("Checksum error 0x%02X\n",checksum);
                        }
                        else
                        {
                            process_packet(packet,packlen);
                        }
                    }
                    break;
            }
        }
        //do other stuff, handle other devices/interfaces
    }
}
State machines are everywhere. State machines are key in communications interfaces where a message needs to be parsed as it is received. Also, there have been many times in embedded systems development that I've needed to separate a task into multiple tasks because of strict timing constraints.
A lot of digital hardware design involves creating state machines to specify the behaviour of your circuits. It comes up quite a bit if you're writing VHDL.
QA infrastructure, intended to screen-scrape or otherwise run through a process under test. (This is my particular area of experience; I built a state machine framework in Python for my last employer, with support for pushing the current state onto a stack and various methods of state handler selection, for use in all our TTY-based screen scrapers.) The conceptual model fits well: when running through a TTY application, it goes through a limited number of known states and can be moved back into old ones (think of navigating a nested menu). This has been released (with said employer's permission); use Bazaar to check out http://web.dyfis.net/bzr/isg_state_machine_framework/ if you want to see the code.
Ticket-, process-management and workflow systems -- if your ticket has a set of rules determining its movement between NEW, TRIAGED, IN-PROGRESS, NEEDS-QA, FAILED-QA and VERIFIED (for example), you've got a simple state machine.
Building small, readily provable embedded systems -- traffic light signaling is a key example where the list of all possible states has to be fully enumerated and known.
Parsers and lexers are heavily state-machine based, because how the incoming stream is interpreted depends on where you are at the time.
A FSM is used everywhere you have multiple states and need to transition to a different state on stimulus.
(turns out that this encompasses most problems, at least theoretically)
Regular expressions are another example of where finite state machines (or "finite state automata") come into play.
A compiled regexp is a finite state machine, and
the sets of strings that regular expressions can match are exactly the languages that finite state automata can accept (called "regular languages").
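For a concrete (if toy) illustration, here is the regular expression /^ab*c$/ written out as an explicit state machine in JavaScript; the state names are invented for the example:

// Accepts exactly the strings matched by /^ab*c$/.
function matchesABstarC(s) {
  let state = "EXPECT_A"
  for (const ch of s) {
    if (state === "EXPECT_A") {
      state = ch === "a" ? "IN_B_STAR" : "FAIL"
    } else if (state === "IN_B_STAR") {
      state = ch === "b" ? "IN_B_STAR" : ch === "c" ? "ACCEPT" : "FAIL"
    } else {
      state = "FAIL"                  // anything after the final "c" fails
    }
    if (state === "FAIL") return false
  }
  return state === "ACCEPT"
}

matchesABstarC("abbbc")   // true, same as /^ab*c$/.test("abbbc")
matchesABstarC("abx")     // false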
I have an example from a current system I'm working on. I'm in the process of building a stock trading system. The process of tracking the state of an order can be complex, but if you build a state diagram for the life cycle of an order it makes applying new incoming transactions to the existing order much simpler. There are many fewer comparisons necessary in applying that transaction if you know from its current state that the new transaction can only be one of three things rather than one of 20 things. It makes the code much more efficient.
I didn't see anything here that actually explained the reason I see them used.
For practical purposes, a programmer usually has to add one when he is forced to return the thread or exit right in the middle of an operation.
For instance, if you have a multi-state HTTP request, you might have server code that looks like this:
Show form 1
process form 1
show form 2
process form 2
The thing is, every time you show a form, you have to quit out of your entire thread on the server (in most languages), even if your code all flows together logically and uses the same variables.
The act of putting a break in the code and returning the thread is usually done with a switch statement, and creates what is called a state machine (a very basic version of one).
As you get more complex, it can get really difficult to figure out what states are valid. People usually then define a "State Transition Table" to describe all the state transitions.
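As an illustration (in JavaScript, with made-up state and event names for the two-form flow above), such a table can be as simple as a lookup from (state, event) to the next state:

// Any (state, event) pair missing from the table is an invalid transition.
const transitions = {
  "SHOW_FORM_1:submit":   "PROCESS_FORM_1",
  "PROCESS_FORM_1:ok":    "SHOW_FORM_2",
  "PROCESS_FORM_1:error": "SHOW_FORM_1",
  "SHOW_FORM_2:submit":   "PROCESS_FORM_2",
  "PROCESS_FORM_2:ok":    "DONE",
  "PROCESS_FORM_2:error": "SHOW_FORM_2",
}

function nextState(state, event) {
  const next = transitions[`${state}:${event}`]
  if (!next) throw new Error(`Invalid transition: ${event} in ${state}`)
  return next
}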
I wrote a state machine library, the main concept being that you can actually implement your state transition table directly. It was a really neat exercise, not sure how well it's going to go over though...
Finite state machines can be used for morphological parsing in any natural language.
Theoretically, this means that morphology and syntax are split up between computational levels, one being at most finite-state, and the other being at most mildly context sensitive (thus the need for other theoretical models to account for word-to-word rather than morpheme-to-morpheme relationships).
This can be useful in the area of machine translation and word glossing. Ostensibly, they're low-cost features to extract for less trivial machine learning applications in NLP, such as syntactic or dependency parsing.
If you're interested in learning more, you can check out Finite State Morphology by Beesley and Karttunen, and the Xerox Finite State Toolkit they designed at PARC.
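As a toy illustration only (real systems use full finite-state transducers such as those described in the book above), a tiny recognizer for a verb stem plus a suffix can be seen as a small state machine: consume a stem, then accept an optional suffix. The stems and suffixes here are made up for the example:

// Toy sketch: "stem state" then "suffix state"; accept if both are consumed.
const stems = ["walk", "talk", "play"]
const suffixes = ["", "s", "ed", "ing"]

function analyze(word) {
  for (const stem of stems) {
    if (!word.startsWith(stem)) continue    // stay in the stem-matching state
    const rest = word.slice(stem.length)    // stem consumed, move to suffix state
    if (suffixes.includes(rest)) {
      return { stem, suffix: rest || null } // accepting state reached
    }
  }
  return null                               // no accepting path: not a known form
}

analyze("walked")   // { stem: "walk", suffix: "ed" }
analyze("playing")  // { stem: "play", suffix: "ing" }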
State driven code is a good way to implement certain types of logic (parsers being an example). It can be done in several ways, for example:
State driving which bit of code is actually being executed at a given point (i.e. the state is implicit in the piece of code you are writing). Recursive descent parsers are a good example of this type of code.
State driving what to do in a conditional such as a switch statement.
Explicit state machines such as those generated by parser generating tools such as Lex and Yacc.
Not all state driven code is used for parsing. A general state machine generator is smc. It inhales a definition of a state machine (in its language) and it will spit out code for the state machine in a variety of languages.
Good answers. Here's my 2 cents. Finite state machines are a theoretical idea that can be implemented in multiple different ways, such as a table or a while-switch (but don't tell anybody that's a way of saying goto, horrors!). It is a theorem that any FSM corresponds to a regular expression, and vice versa. Since a regular expression corresponds to a structured program, you can sometimes just write a structured program to implement your FSM. For example, a simple parser of numbers could be written along the lines of:
/* implement dd*[.d*] */
if (isdigit(*p)){
    while(isdigit(*p)) p++;
    if (*p=='.'){
        p++;
        while(isdigit(*p)) p++;
    }
    /* got it! */
}
You get the idea. And, if there's a way that runs faster, I don't know what it is.
A typical use case is traffic lights.
On an implementation note: Java 5's enums can have abstract methods, which is an excellent way to encapsulate state-dependent behavior.
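As a rough sketch of that idea (in JavaScript rather than Java, so plain objects stand in for the enum constants, and the timings are made up), the state-dependent behavior lives in the states themselves:

// Each light knows its own duration and successor, like an enum constant
// overriding an abstract method.
const RED    = { name: "RED",    seconds: 30, next: () => GREEN }
const GREEN  = { name: "GREEN",  seconds: 25, next: () => YELLOW }
const YELLOW = { name: "YELLOW", seconds: 5,  next: () => RED }

let light = RED
for (let i = 0; i < 6; i++) {
  console.log(`${light.name} for ${light.seconds}s`)
  light = light.next()   // the state itself decides what comes next
}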