Playing a recorded game replay

I have a problem with playing back a recorded game replay. The replay data consists of tuples of the timestamp when a user input was made and the input itself.
Each frame while playing the replay, I try to find a matching user input in the replay data. But since the time in each frame almost never matches a timestamp in the replay data exactly, I can only fetch the closest timestamp, which differs from the current frame time by about 0.01 seconds on average. This makes the replay very imprecise.
What options do I have for dealing with this difference?

Found the solution myself:
If you want to record demos using only the user input, two criteria have to be fulfilled:
Have a fixed game update rate, independent of the FPS
Physics have to be deterministic
Because both points are not easy to implement, I decided to use frame snapshots to save the demos. This results in more data being saved but is much easier to develop.
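For illustration, here is a minimal sketch of the snapshot approach in TypeScript. The GameState fields and the fixed tick length are placeholders for the example, not from the actual game: record a copy of the state once per fixed update, then map the playback clock directly to a tick index, so there is no timestamp matching (and no drift) at all.
const TICK_SECONDS = 1 / 60; // assumed fixed update rate

interface GameState {
  playerX: number; // illustrative fields only
  playerY: number;
}

const snapshots: GameState[] = [];

// Called once per fixed update while recording.
function recordSnapshot(state: GameState): void {
  snapshots.push({ ...state }); // store a copy, one snapshot per tick
}

// Called while playing the replay: elapsed time maps directly to a tick.
function snapshotAt(elapsedSeconds: number): GameState | undefined {
  if (snapshots.length === 0) return undefined;
  const tick = Math.floor(elapsedSeconds / TICK_SECONDS);
  return snapshots[Math.min(tick, snapshots.length - 1)];
}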

Determining the 'amount' of speaking in a video

I'm working on a project to transcribe lecture videos. We are currently just using humans to do the transcriptions, as we believe it is easier to transcribe from scratch than to edit ASR output, especially for technical subjects (not the point of my question, though I'd love any input on this). From our experience we've found that after about 10 minutes of transcribing we get anxious or lose focus. Thus we have been splitting videos into ~5-7 minute chunks based on logical breaks in the lecture content. However, we've found that the start of a lecture (at least for the class we are piloting) often has more talking than later parts, which often include stretches where the students are talking among themselves about a question. I was thinking that we could use signal processing to determine the rough amount of speaking throughout the video. The idea is to break the video into segments containing roughly the same amount of lecturing, as opposed to segments of the same length.
I've done a little research into this, but everything seems to be a bit overkill for what I'm trying to do. The videos for this course, though we'd like to generalize, contain basically just the lecturer with some occasional feedback and distant student voices. So can I simply look at the waveform and roughly use the spots containing audio over some threshold to determine when the lecturer is speaking? Or is an ML approach really necessary to quantify the lecturer's speaking?
Hope that made sense, I can clarify anything if necessary.
Appreciate the help as I have no experience with signal processing.
Although there are machine learning methods that are very good at discriminating voice from other sounds, you don't seem to require that sort of accuracy for your application. A simple level-based method similar to the one you proposed should be good enough to get you an estimate of speaking time.
Level-Based Sound Detection
Goal
Given an audio sample, discriminate the portions with a high amount of sounds from the portions that consist of background noise. This can then be easily used to estimate the amount of speech in a sound file.
Overview of Method
Rather than looking at raw levels in the signal, we will first convert it to a sliding-window RMS. This gives a simple measure of how much audio energy there is at any given point in the audio sample. By analyzing the RMS signal we can automatically determine a threshold for distinguishing between background noise and speech.
Worked Example
I will be working this example in MATLAB because it makes the math easy to do and lets me create illustrations.
Source Audio
I am using President Kennedy's "We choose to go to the moon" speech, taking the audio file from Wikipedia and extracting just the left channel.
% Load the audio and take the left channel.
imported = importdata('moon.ogg');
audio = imported.data(:,1);
% Plot the raw signal against time in seconds.
plot((1:length(audio))/imported.fs, audio);
title('Raw Audio Signal');
xlabel('Time (s)');
Generating RMS Signal
Although you could technically implement an overlapping per-sample sliding window, it is simpler to avoid the overlap, and you'll get very similar results. I broke the signal into one-second chunks and stored the RMS values in a new array, with one entry per second of audio.
% Compute one RMS value per non-overlapping one-second window.
audioRMS = [];
for i = 1:imported.fs:(length(audio)-imported.fs+1)
    audioRMS = [audioRMS; rms(audio(i:(i+imported.fs-1)))];
end
plot(1:length(audioRMS), audioRMS);
title('Audio RMS Signal');
xlabel('Time (s)');
This results in a much smaller array, full of positive values representing the amount of audio energy or "loudness" per second.
Picking a Threshold
The next step is to determine how "loud" is "loud enough." You can get an idea of the distribution of noise levels with a histogram:
histogram(audioRMS, 50);
I suspect that the lower shelf is the general background noise of the crowd and recording environment. The next shelf is probably the quieter applause. The rest is speech and loud crowd reactions, which will be indistinguishable to this method. For your application, the loudest areas will almost always be speech.
The minimum value in my RMS signal is 0.0233, and as a rough guess I'm going to use three times that value as my criterion for noise. That seems like it will cut off the whole lower shelf and most of the next one.
A simple check against that threshold gives a count of 972 seconds of speech:
>> sum(audioRMS > 3*min(audioRMS))
ans =
972
To test how well it actually worked, we can listen to the audio that was eliminated.
% Mark each one-second window as speech or background.
speech = audioRMS > 3*min(audioRMS);
% Collect the seconds classified as background so we can listen to them.
clippedAudio = [];
for i = 1:length(speech)
    if ~speech(i)
        clippedAudio = [clippedAudio; audio(((i-1)*imported.fs+1):(i*imported.fs))];
    end
end
sound(clippedAudio, imported.fs);
Listening to this gives a bit over a minute of background crowd noise and sub-second clips of portions of words, due to the one-second windows used in the analysis. No significant lengths of speech are clipped. Doing the opposite gives audio that is mostly the speech, with clicks heard as portions are skipped. The louder applause breaks also make it through.
This means that for this speech, the threshold of three times the minimum RMS worked very well. You'll probably need to fiddle with that ratio to get good automatic results for your recording environment, but it seems like a good place to start.

Questions about updating my node.js game

I am making a little game using node.js for the server and a .js file with an HTML5 canvas for clients. The players each have an object they can move around with the arrow keys.
Now I have implemented two different ways of updating the game. The first was sending the new position of the player every time it changes. It worked, but my server had to process around 60 x/y pairs a second (the update rate of the client is 30/sec and there were 2 players moving non-stop).
The second method was to only send the new position and speed/direction of the player's object when they change direction or speed, so on the other clients the movement of the player was interpolated using the direction/speed from the last update. My server only had to process very few x/y/speed/direction packets; however, my clients experienced a little lag when the packets arrived, since the interpolated position was often a little bit away from the actual position written in the packet.
Now my question is: which method would you recommend? And how should I implement lag compensation for either method?
If you have low latency, interpolate from the position at which the object is currently drawn to the new position; with low latency the difference is barely noticeable.
If you have high latency, you can implement something like EPIC (Entity Position Interpolation Code):
http://www.mindcontrol.org/~hplus/epic/
You can also check how it is done in Browser-Quest.
https://github.com/mozilla/BrowserQuest
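As a rough sketch of the low-latency case (TypeScript; renderPos, serverPos, and the smoothing rate are placeholder names, not from your code): each frame, close a fraction of the gap between where the object is drawn and the last received position, so corrections are smeared over several frames instead of snapping.
interface Vec2 { x: number; y: number; }

let renderPos: Vec2 = { x: 0, y: 0 }; // where the object is currently drawn
let serverPos: Vec2 = { x: 0, y: 0 }; // latest position received in a packet

function onServerPacket(pos: Vec2): void {
  serverPos = pos; // don't snap; let the render loop converge smoothly
}

// Call once per rendered frame with the frame's delta time.
function updateRenderPosition(dtSeconds: number): void {
  const rate = 10; // higher = snappier, lower = smoother; tune to taste
  const t = 1 - Math.exp(-rate * dtSeconds); // frame-rate independent blend
  renderPos.x += (serverPos.x - renderPos.x) * t;
  renderPos.y += (serverPos.y - renderPos.y) * t;
}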
Good luck!

How to simulate the ALOHA protocol using a programming language?

I'm new to networking in general and I read about this protocol called Aloha, and I would like to make a simple simulator for the Pure version of it.
Although I understand the concepts, I find it difficult to start.
Basically we have N senders in the network. Each sender wants to send a packet. Each sender doesn't care if the network is busy or occupied by some other sender: if it wants to send data, it just sends it.
The problem is that if 2 senders send some data at the same time, the transmissions collide and both packets are destroyed.
Since they are destroyed, the two senders will need to send the same packets again.
I understand this simple concept; the difficulty is how to model it using probabilities.
Now I need to find the throughput, which is the rate of (successful) transmission of frames.
Before going any further, we have to make some assumptions:
All frames have the same length.
Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a station keeps trying to send a frame, it cannot be allowed to generate more frames to send.)
The population of stations attempts to transmit (both new frames and old frames that collided) according to a Poisson distribution.
I can't really understand the third assumption: how do I apply this probability in ALOHA?
I can't find any code online to get an idea of how this would be done...
Here is some further information on this protocol:
http://en.wikipedia.org/wiki/ALOHAnet#Pure_ALOHA
I think you have to split time into intervals; in each interval, a certain number of stations attempt to transmit. That number is the number of events occurring in a fixed interval of time, according to http://en.wikipedia.org/wiki/Poisson_distribution.
You have to model it according to the Poisson distribution.
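To make that concrete, here is a self-contained sketch of a pure ALOHA throughput simulation (TypeScript; the function name and parameters are made up for the example). It models the combined attempts of all stations as a single Poisson process with rate G attempts per frame-time, which is exactly the third assumption; a frame succeeds only if no other attempt starts within one frame-time on either side (pure ALOHA's vulnerable period of two frame-times).
function simulatePureAloha(G: number, totalTime: number): number {
  // Gaps of a Poisson process are exponentially distributed,
  // so draw attempt start times by summing exponential gaps.
  const starts: number[] = [];
  let t = 0;
  for (;;) {
    t += -Math.log(1 - Math.random()) / G;
    if (t >= totalTime) break;
    starts.push(t);
  }
  // A frame of length 1 starting at s collides if any other
  // frame starts in (s - 1, s + 1).
  let successes = 0;
  for (let i = 0; i < starts.length; i++) {
    const prevOk = i === 0 || starts[i] - starts[i - 1] >= 1;
    const nextOk = i === starts.length - 1 || starts[i + 1] - starts[i] >= 1;
    if (prevOk && nextOk) successes++;
  }
  return successes / totalTime; // throughput S, in frames per frame-time
}

// The simulated S should track the textbook curve S = G * exp(-2G),
// peaking near 1/(2e) ≈ 0.184 at G = 0.5.
console.log(simulatePureAloha(0.5, 100000));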
My 2 cents, hope this helps

How to predict when next event occurs based on previous events? [closed]

Basically, I have a reasonably large list (a year's worth of data) of times that a single discrete event occurred (for my current project, a list of times that someone printed something). Based on this list, I would like to construct a statistical model of some sort that will predict the most likely time for the next event (the next print job) given all of the previous event times.
I've already read this, but the responses don't exactly help out with what I have in mind for my project. I did some additional research and found that a Hidden Markov Model would likely allow me to do so accurately, but I can't find a link explaining how to generate a Hidden Markov Model using just a list of times. I also found that using a Kalman filter on the list may be useful, but I'd like to get some more information about it from someone who's actually used them and knows their limitations and requirements before just trying something and hoping it works.
Thanks a bunch!
EDIT: Following Amit's suggestion in the comments, I also posted this to the Statistics StackExchange, CrossValidated. If you know what I should do, please post either here or there.
I'll admit it, I'm not a statistics kind of guy. But I've run into these kinds of problems before. Really what we're talking about here is that you have some observed, discrete events and you want to figure out how likely it is that you'll see them occur at any given point in time. The issue is that you want to take discrete data and make continuous data out of it.
The term that comes to mind is density estimation, specifically kernel density estimation. You can get some of the effects of kernel density estimation by simple binning (e.g. count the number of events in a time interval, such as every quarter hour or hour). Kernel density estimation just has some nicer statistical properties than simple binning. (The produced data is often 'smoother'.)
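A minimal sketch of the binning version (TypeScript; it assumes an input array of epoch-second timestamps, which the question doesn't specify): count events per hour, then average neighbouring bins as a cheap stand-in for a kernel.
// Estimate event density by hourly binning, then smooth it slightly.
function hourlyDensity(timestamps: number[]): number[] {
  const start = Math.min(...timestamps);
  const end = Math.max(...timestamps);
  const bins: number[] = new Array(Math.floor((end - start) / 3600) + 1).fill(0);
  for (const t of timestamps) {
    bins[Math.floor((t - start) / 3600)]++; // one bin per hour
  }
  // Average each bin with its neighbours: a crude, 'smoother' density.
  return bins.map((_, i) => {
    const window = bins.slice(Math.max(0, i - 1), i + 2);
    return window.reduce((a, b) => a + b, 0) / window.length;
  });
}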
That only takes care of one of your problems, though. The next problem is still the far more interesting one: how do you take a timeline of data (in this case, only printer data) and produce a prediction from it? First things first: the way you've set up the problem may not be what you're looking for. While the miracle idea of having a limited source of data and predicting the next step of that source sounds attractive, it's far more practical to integrate more data sources to create an actual prediction. (E.g. maybe the printers get hit hard just after there's a lot of phone activity, something that can be very hard to predict in some companies.) The Netflix Challenge is a rather potent example of this point.
Of course, the problem with more data sources is the extra legwork of setting up the systems that collect the data.
Honestly, I'd consider this a domain-specific problem and take two approaches: find time-independent patterns, and find time-dependent patterns.
An example time-dependent pattern would be that every weekday at 4:30 Suzy prints out her end-of-day report. This happens at specific times every day of the week, and is extremely simple to detect with predetermined intervals (every day, every weekday, every weekend day, every Tuesday, every 1st of the month, etc.): just create a curve of the estimated probability density function that's one week long, go back in time, and average the curves (possibly a weighted average via a windowing function for better predictions).
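A sketch of that week-long curve (TypeScript, with the same epoch-second assumption as above): fold every event onto its position within the week and average over the number of weeks observed.
const WEEK_SECONDS = 7 * 24 * 3600;

// One bin per hour of the week: 7 * 24 = 168 bins.
// Note: with epoch seconds, bin 0 is Thursday 00:00 UTC; relabel as needed.
function weeklyProfile(timestamps: number[]): number[] {
  const bins: number[] = new Array(168).fill(0);
  for (const t of timestamps) {
    const secondOfWeek = ((t % WEEK_SECONDS) + WEEK_SECONDS) % WEEK_SECONDS;
    bins[Math.floor(secondOfWeek / 3600)]++;
  }
  const span = Math.max(...timestamps) - Math.min(...timestamps);
  const weeks = Math.max(span / WEEK_SECONDS, 1);
  return bins.map(c => c / weeks); // average events per hour-of-week
}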
If you want to get more sophisticated, find a way to automate the detection of such intervals. (Likely the data wouldn't be so overwhelming that you could just brute force this.)
An example time-independent pattern is that every time Mike in accounting prints out an invoice list sheet, he goes over to Johnathan, who prints out a rather large batch of complete invoice reports a few hours later. This kind of thing is harder to detect because it's more free-form. I recommend looking at various intervals of time (e.g. 30 seconds, 40 seconds, 50 seconds, 1 minute, 1.2 minutes, 1.5 minutes, 1.7 minutes, 2 minutes, 3 minutes, .... 1 hour, 2 hours, 3 hours, ....) and subsampling them in a nice way (e.g. Lanczos resampling) to create a vector. Then use a vector-quantization style algorithm to categorize the "interesting" patterns. You'll need to think carefully about how you'll deal with the certainty of the categories, though: if a resulting category has very little data in it, it probably isn't reliable. (Some vector quantization algorithms are better at this than others.)
Then, to create a prediction as to the likelihood of printing something in the future, look up the most recent activity intervals (30 seconds, 40 seconds, 50 seconds, 1 minute, and all the other intervals) via vector quantization and weight the outcomes based on their certainty to create a weighted average of predictions.
You'll want to find a good way to measure certainty of the time-dependent and time-independent outputs to create a final estimate.
This sort of thing is typical of predictive data compression schemes. I recommend you take a look at PAQ since it's got a lot of the concepts I've gone over here and can provide some very interesting insight. The source code is even available along with excellent documentation on the algorithms used.
You may want to take an entirely different approach from vector quantization: discretize the data and use something more like a PPM scheme. It can be much simpler to implement and still effective.
I don't know what the time frame or scope of this project is, but this sort of thing can always be taken to the N-th degree. If it's got a deadline, I'd emphasize getting something working first and then making it work well. Something suboptimal is better than nothing.
This kind of project is cool. This kind of project can get you a job if you wrap it up right. I'd recommend you take your time, do it right, and post it up as functional, open-source, useful software. I highly recommend open source since you'll want to build a community that can contribute data source providers for more environments than you have access to, the will to support, or the time to support.
Best of luck!
I really don't see how a Markov model would be useful here. Markov models are typically employed when the event you're predicting is dependent on previous events. The canonical example, of course, is text, where a good Markov model can do a surprisingly good job of guessing what the next character or word will be.
But is there a pattern to when a user might print the next thing? That is, do you see a regular pattern of time between jobs? If so, then a Markov model will work. If not, then the Markov model will be a random guess.
As for how to model it, think of the different time periods between jobs as letters in an alphabet. In fact, you could assign each time period a letter, something like:
A - 1 to 2 minutes
B - 2 to 5 minutes
C - 5 to 10 minutes
etc.
Then go through the data and assign a letter to each time period between print jobs. When you're done, you have a text representation of your data that you can run through any of the Markov examples that do text prediction.
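As a sketch of that pipeline (TypeScript; the bucket boundaries are just the example ones from the list above): encode each gap as a letter, count first-order letter-to-letter transitions, and predict the most frequent successor of the most recent letter.
function gapToLetter(minutes: number): string {
  if (minutes < 2) return 'A';  // 1 to 2 minutes (and below)
  if (minutes < 5) return 'B';  // 2 to 5 minutes
  if (minutes < 10) return 'C'; // 5 to 10 minutes
  return 'D';                   // anything longer
}

// Predict the letter class of the next gap from past gaps (in minutes).
function predictNextLetter(gapsMinutes: number[]): string | undefined {
  const letters = gapsMinutes.map(gapToLetter);
  const counts = new Map<string, Map<string, number>>();
  for (let i = 0; i + 1 < letters.length; i++) {
    const row = counts.get(letters[i]) ?? new Map<string, number>();
    row.set(letters[i + 1], (row.get(letters[i + 1]) ?? 0) + 1);
    counts.set(letters[i], row);
  }
  const row = counts.get(letters[letters.length - 1]);
  if (!row) return undefined; // no transitions observed from this letter
  return [...row.entries()].sort((a, b) => b[1] - a[1])[0][0];
}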
If you have an actual model that you think might be relevant for the problem domain, you should apply it. For example, it is likely that there are patterns related to day of week, time of day, and possibly date (holidays would presumably show lower usage).
Most raw statistical modelling techniques based on examining (say) time between adjacent events would have difficulty capturing these underlying influences.
I would build a statistical model for each of those known events (day of week, etc), and use that to predict future occurrences.
I think a predictive neural network would be a good approach for this task.
http://en.wikipedia.org/wiki/Predictive_analytics#Neural_networks
This method is also used for prediction in, e.g., weather forecasting, the stock market, and sunspots.
There's a tutorial here if you want to know more about how it works.
http://www.obitko.com/tutorials/neural-network-prediction/
Think of a Markov chain as a graph with vertices connected to each other by weights or distances. Moving around this graph would eat up the sum of the weights or distances you travel. Here is an example with text generation: http://phpir.com/text-generation.
A Kalman filter is used to track a state vector, generally with continuous (or at least discretized continuous) dynamics. This is sort of the polar opposite of sporadic, discrete events, so unless you have an underlying model that includes this kind of state vector (and is either linear or almost linear), you probably don't want a Kalman filter.
It sounds like you don't have an underlying model, and are fishing around for one: you've got a nail, and are going through the toolbox trying out files, screwdrivers, and tape measures 8^)
My best advice: first, use what you know about the problem to build the model; then figure out how to solve the problem, based on the model.

Best approach for game animation?

I have a course exercise in OpenGL to write a game with simple animation of a few objects.
While discussing our design options with my partner, we realized we have two major choices for how the animation is going to work. Either:
Set a timer for a constant interval, say 30 ms; when the timer fires, calculate where the objects should be and draw the frame. Or:
Don't use a timer: run a normal loop all the time, and in each iteration check how much time has passed, calculate where the objects should be for that interval, and draw the frame.
What should generally be the preferred approach? Does anyone have concrete experience with either approach?
Render and compute as fast as you can to get the maximum frame rate (as capped by the vertical sync).
Don't use a timer; they're not reliable below 50-100 ms on Windows. Check how much time has passed instead. (Usually you need both the delta t and an absolute value, depending on whether your animation is physics- or keyframe-based.)
Also, if you want to be stable, put an upper/lower bound on your time step: go into slow motion if a frame takes a few seconds to render (disk access by another process?), or skip an update if you get two of them within, say, 10 ms.
Update
(Since this is a rather popular answer)
I usually prefer having a fixed time step, as it makes everything more stable. Most physics engines are pretty robust against varying time, but other things, like particle systems, various simpler animations, or even game logic, are easier to tune when everything runs at a fixed time step.
Update2
(Since I got 10 upvotes ;)
For further stability over long periods of running (>4 hours), you probably want to make sure you're not using floats/doubles to compute large time differences, since you lose precision doing so and your game's animations/physics will suffer. Use fixed point (or 64-bit microsecond-based) integers instead.
For the hairy details, I recommend reading A matter of precision by Tom Forsyth.
Read this page about game loops.
In short, set a timer:
Update the state of the game at a fixed frequency (something like every 25 ms = 1 s / 40 fps). That includes the properties of the game objects, the input, the physics, the AI, etc. I call that the Model and the Controller. The need for a fixed update rate comes from the problems that may appear on too-slow or too-fast hardware (read the article). Some physics engines also prefer to be updated at a fixed frequency.
Update the frame (the graphics) of the game as fast as possible. That would be the View. That way you'll provide a smooth game. You can also enable vsync so the display will wait for the graphics card (usually it's 60 fps).
So on each iteration of the loop, you check whether you should update the model/controller. If it's late, update until it's up to date. Then update the frame once and continue your loop.
The tricky part is that because of the different update rates, on fast hardware the view will update several times before the model and controller do. Therefore you should interpolate the positions of your game objects depending on "where they would be if the game state had been updated". It's really not that difficult.
You may have to maintain two different data structures: one for the model and one for the view. For instance, you could have a scene graph for your model and a BSP tree for your view.
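In code, the loop described above looks roughly like this (TypeScript sketch; updateModel and renderView are placeholders for your own model/controller and view code):
const UPDATE_MS = 25; // fixed model/controller step: 40 updates per second

let previousMs = Date.now();
let accumulatorMs = 0;

function updateModel(dtSeconds: number): void {
  // placeholder: input, physics, AI, game logic at a fixed rate
}

function renderView(alpha: number): void {
  // placeholder: draw objects interpolated by alpha between the
  // previous and current model states
}

function frame(): void {
  const nowMs = Date.now();
  accumulatorMs += nowMs - previousMs;
  previousMs = nowMs;

  // If rendering fell behind, update the model until it's up to date,
  // but cap the catch-up so a long stall can't spiral.
  let steps = 0;
  while (accumulatorMs >= UPDATE_MS && steps < 5) {
    updateModel(UPDATE_MS / 1000);
    accumulatorMs -= UPDATE_MS;
    steps++;
  }
  accumulatorMs = Math.min(accumulatorMs, UPDATE_MS); // drop unrecoverable backlog

  // Render once, interpolating by how far we are into the next step.
  renderView(accumulatorMs / UPDATE_MS);
}

setInterval(frame, 0); // or requestAnimationFrame(frame) in a browser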
The second would be my preferred approach, because timers are often not as accurate as you might think, and they have all the latency and overhead of the event-handling system. Accounting for the elapsed time will give your animations a much more consistent look and be robust if/when your frame rate dips.
Having said that, if your animation is based on a physics simulation (e.g. rigid body or ragdoll animation), then having a fixed update interval for your physics can greatly simplify the implementation.
Option 2 is by far preferred. It will scale nicely across differently performing hardware.
The book "Game Programming Gems 1" had a chapter that covers exactly what you need:
Frame Rate Independent Linear Interpolation
Use the second method. I did a game for my senior project, and from experience there is no guarantee that your logic will be done processing when the timer wants to fire.
I would be tempted to use the loop, since it will render as fast as possible (i.e. immediately after your physics computations are done). This will probably be more robust if you run into any slow-down in computation, which would cause timer firings to start queueing up. However, in case of such a slow-down you may have to put a cap on the time step computed between updates, since your physics engine may go unstable with too large a jump in time.
I'd suggest setting the system up to work on a "delta" that's passed in from outside.
When I did this, inside the animation format I based everything on real time values. The delta I passed in was 1 / 30 seconds, but it could be anything. With this system you can get either your first or second option, depending on whether you pass in a fixed delta or if you pass in the amount of time that has passed since the last frame.
As for which is better, it depends on your game and your requirements. Ideally all of your systems would be based around the same delta so that your physics match your animations. If your game drops frames at all and if all of your systems work with a variable delta, I'd suggest the variable delta is the better of the two solutions for end user experience.
