Best way to store replay data - Azure

I currently have a game replay file I'm analyzing, and was wondering about the best way to store the data so I can run complex queries that are fast. For example, every 50 milliseconds the analyzer returns a data structure giving a snapshot of the current round details and each player's status in the game, such as what weapon he's holding, how much health he has, which players he has shot, etc. I want the ability to ask: from the start of the replay file to 10000 milliseconds in, what were the player "Micheal"'s positions? How much damage has the player "Kyle" done to other players from 10000 ms to 20000 ms? I also want to store all the data I'm analyzing and replay it on a frontend through an API so you can visually replay it.
I can store metadata about the replay in the database, such as: (Round 1, StartTime: 10000, EndTime: 30000), (Round 2, StartTime: 31000, EndTime: 37000). I can also store metadata on when a player was killed (Kyle, DeathTime: 31000, KilledBy: Micheal) or when a player was hurt (Kyle, HurtBy: Micheal, Damage: 10, Weapon: x).
To accomplish my goal of being able to create complex queries for different situations, do I need a combination of the two, such as storing the millisecond-by-millisecond data in a NoSQL / document database and also parsing the whole file and storing metadata (like in the second paragraph) in another database? Or is it feasible to store only the millisecond-by-millisecond data and still create fast queries to extract what I want from it?

Sounds like a cool game you are working on. Microsoft Azure Table Storage (a NoSQL database) is very fast and would be a good fit for your game. You can use Slazure (I coded Slazure, BTW), which includes a query language (a custom LINQ provider) and stores the data in Azure Table Storage. Here is how I would do it if I were using Slazure and Azure Table Storage for your game.
I suggest storing one row in the Players table for each event, where the PartitionKey is the player name and the RowKey is the number of milliseconds into the game round. Since each table is indexed on the PartitionKey column and sorted by the RowKey column, all queries using these columns are very fast indeed. There are also columns for the weapon used, position on the screen, health, round number, and which player was killed by whom. You could create a table to hold damage data using the same method proposed below:
using SysSurge.Slazure;
using SysSurge.Slazure.Linq;
using SysSurge.Slazure.Linq.QueryParser;

namespace TableOperations
{
    public class PlayerInfo
    {
        // List of weapons
        public enum WeaponList {
            Axe, Arrow, Knife, Sling
        };

        // Update a player with some new data
        public void UpdatePlayerData(dynamic playersTable, DateTime gameStartedTime, int round, string playerName, WeaponList weapon, int healthPoints, int xPos, int yPos)
        {
            // Create an entity in the Players table using the player name as the PartitionKey; the entity is created if it doesn't already exist
            var player = playersTable.Entity(playerName);

            // Store the time the event was recorded as zero-padded milliseconds since the game started.
            // This means there is one row for each stored player event
            player.RowKey = ((DateTime.UtcNow.Ticks - gameStartedTime.Ticks)/10000).ToString("d19");

            player.Round = round; // Round number
            player.Weapon = (int)weapon; // Weapon carried by the player, e.g. Axe

            // Store player X and Y position coordinates on the screen
            player.X = xPos;
            player.Y = yPos;

            // Number of health points; zero means the player is dead
            player.HealthPoints = healthPoints;

            // Save the entity to the Azure Table Service storage
            player.Save();
        }

        // Record that a player was killed
        public void PlayerKilled(dynamic playersTable, DateTime gameStartedTime, int round, string playerName, string killedByPlayerName)
        {
            // Create an entity in the Players table using the player name as the PartitionKey; the entity is created if it doesn't already exist
            var player = playersTable.Entity(playerName);

            // Store the time the event was recorded as zero-padded milliseconds since the game started.
            // This means there is one row for each stored player event
            player.RowKey = ((DateTime.UtcNow.Ticks - gameStartedTime.Ticks)/10000).ToString("d19");

            player.Round = round; // Round number

            // Number of health points; zero means the player is dead
            player.HealthPoints = 0;
            player.KilledByPlayerName = killedByPlayerName; // Killed by this player, e.g. "Kyle"

            // Save the entity to the Azure Table Service storage
            player.Save();
        }

        // Get all the player positions between two points in time
        public System.Linq.IQueryable GetPlayerPositions(dynamic playersTable, string playerName, int fromMilliseconds, int toMilliseconds, int round)
        {
            return playersTable.Where("PrimaryKey == #0 && RowKey >= #1 && RowKey <= #2 && Round == #3",
                playerName, fromMilliseconds.ToString("d19"), toMilliseconds.ToString("d19"), round).Select("new(X, Y)");
        }
    }
}
First you need to record when the game started and the round number:
var gameStartedTime = DateTime.UtcNow;
var round = 1; // Round #1
Then create a table in the NoSQL database:
// Get a reference to the Table Service storage
dynamic storage = new DynStorage("UseDevelopmentStorage=true");
// Get reference to the Players table, it's created if it doesn't already exist
dynamic playersTable = storage.Players;
Now, during the game you can continuously update the player information like so:
UpdatePlayerData(playersTable, gameStartedTime, round, "Micheal", WeaponList.Axe, 45, 12313, 2332);
UpdatePlayerData(playersTable, gameStartedTime, round, "Kyle", WeaponList.Knife, 100, 13343, 2323);
If you need to wait 50 ms between each storage event you can do something like:
System.Threading.Thread.Sleep(50);
Then store some more player event data:
UpdatePlayerData(playersTable, gameStartedTime, round, "Micheal", WeaponList.Axe, 12, 14555, 1990);
UpdatePlayerData(playersTable, gameStartedTime, round, "Kyle", WeaponList.Sling, 89, 13998, 2001);
When one of the players has died you can call the same method with zero health points and the name of the player that killed him/her:
PlayerKilled(playersTable, gameStartedTime, round, "Micheal", "Kyle");
Now, later in your game analyser you can query for all the positions from the start of the game (0 ms) to 10,000 ms into the game as follows:
// Get a reference to the table storage and the table
dynamic queryableStorage = new QueryableStorage<DynEntity>("UseDevelopmentStorage=true");
QueryableTable<DynEntity> queryablePlayersTable = queryableStorage.PlayersTable;

var playerPositionsQuery = GetPlayerPositions(queryablePlayersTable, "Micheal", 0, 10000, round);

// Cast each query result to a dynamic so that we can access its dynamic properties
foreach (dynamic player in playerPositionsQuery)
{
    // Show player positions in the console
    Console.WriteLine("Player position: Name=" + player.PrimaryKey + ", Game time MS=" + player.RowKey + ", X-position=" + player.X + ", Y-position=" + player.Y);
}
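A side note on the "d19" format used for the RowKey above: Azure Table Storage sorts RowKey values as strings, so the millisecond offsets must be zero-padded to a fixed width for the range queries to return the right rows. A small standalone sketch (plain Java here, purely to illustrate the ordering point):

```java
public class RowKeyPadding {
    // Zero-pad the millisecond offset to 19 digits, like ToString("d19") in the answer
    static String rowKey(long millisecondsIntoGame) {
        return String.format("%019d", millisecondsIntoGame);
    }

    public static void main(String[] args) {
        // Unpadded, "999" sorts AFTER "10000" lexicographically...
        System.out.println("999".compareTo("10000") > 0);             // true
        // ...but padded keys sort in the same order as the numbers
        System.out.println(rowKey(999).compareTo(rowKey(10000)) < 0); // true
        System.out.println(rowKey(999)); // 0000000000000000999
    }
}
```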

Related

How to make time slots using kotlin

"StartingTime":"11:00",
"EndingTime":"5:00"
Hello, I have a JSON response containing these two strings. What I want to do is make time slots using the StartingTime and EndingTime. BTW, these two can change for different responses. I want to make time slots with a 2-hour difference between them, and I also want to add an extra 2 hours after the EndingTime.
Example:
StartingTime = 11:00
EndingTime = 5:00
Time Slots I need = 11:00-1:00 , 1:00-3:00 , 3:00-5:00 , 5:00-7:00
Also once I get this time slots I want to store and add them in a spinner.
How can I achieve it? Thanks.
You can make a simple data class to represent a time slot.
import java.time.LocalTime

data class TimeSlot(val startTime: LocalTime, val endTime: LocalTime)
And then write a function that splits it up into as many slots that will fit:
fun TimeSlot.divide(lengthHours: Long): List<TimeSlot> {
    require(lengthHours > 0) { "lengthHours was $lengthHours. Must specify positive amount of hours." }
    val timeSlots = mutableListOf<TimeSlot>()
    var nextStartTime = startTime
    while (true) {
        val nextEndTime = nextStartTime.plusHours(lengthHours)
        if (nextEndTime > endTime) {
            break
        }
        timeSlots.add(TimeSlot(nextStartTime, nextEndTime))
        nextStartTime = nextEndTime
    }
    return timeSlots
}
Note, this simple comparison nextEndTime > endTime won't handle a time range that crosses midnight. You'd have to make this a little more complicated if you want to handle that.
You can look up in other existing questions how to parse the JSON values into LocalTimes and how to populate a Spinner from a List.
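To cover the question's "extra 2 hours after the EndingTime", you can simply extend the end time before dividing. Here is the same loop translated to Java as a quick check against the question's example (the TimeSlot record and divide method below are illustrative re-implementations, not the answer's code):

```java
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.List;

public class TimeSlotDemo {
    record TimeSlot(LocalTime start, LocalTime end) {}

    // Same idea as the Kotlin divide(): emit slots until the next one would overrun the end.
    // Like the Kotlin version, this does not handle ranges that cross midnight.
    static List<TimeSlot> divide(LocalTime start, LocalTime end, long lengthHours) {
        List<TimeSlot> slots = new ArrayList<>();
        LocalTime next = start;
        while (next.isBefore(end) && !next.plusHours(lengthHours).isAfter(end)) {
            slots.add(new TimeSlot(next, next.plusHours(lengthHours)));
            next = next.plusHours(lengthHours);
        }
        return slots;
    }

    public static void main(String[] args) {
        LocalTime start = LocalTime.of(11, 0);            // StartingTime 11:00
        LocalTime end = LocalTime.of(17, 0).plusHours(2); // EndingTime 5:00 PM plus the extra 2 hours
        for (TimeSlot slot : divide(start, end, 2)) {
            System.out.println(slot.start() + "-" + slot.end());
        }
        // Prints 11:00-13:00, 13:00-15:00, 15:00-17:00, 17:00-19:00
    }
}
```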

Modelling Time Series data with tags

I'm currently working on a PoC to model time series data.
The initial datapoint structure:
- the name of a sensor: 192.168.1.1:readCount
- a timestamp
- a value
I use the sensor name as the row id and the timestamp as the column id. This approach works very well.
However, I want to add tags to attach additional data.
public class Datapoint {
    public String metricName;
    public long timestampMs;
    public long value;
    public Map<String, String> tags = new HashMap<String, String>();
}
Datapoint datapoint = new Datapoint();
datapoint.metricName = "IMap.readCount";
datapoint.value = 10;
datapoint.timestampMs = System.currentTimeMillis();
datapoint.tags.put("cluster", "dev");
datapoint.tags.put("member", "192.168.1.1:5701");
datapoint.tags.put("id", "map1");
datapoint.tags.put("company", "Foobar");
I want to use it to say:
- aggregate all metrics for all different machines with the same id. E.g. if machine 1 did 10 writes for mapx and machine 2 did 20 writes for mapx, I want to know the total of 30.
- aggregate metrics for all maps: if machine 1 did 20 writes on mapx and 30 writes on mapy, I want to know the total of 50.
The question is how I should model this.
I know that a composite can be used for the column id. So in theory I could add each tag as an element in that composite. But can a column be searched efficiently when it has a variable number of elements in the composite?
I know my question is a bit foggy, but I think this reflects my understanding of Cassandra since I just started with it.
@pveentjer
"I know that a composite can be used for the column id. So in theory I could add each tag as a an element in that composite. But can a column be efficiently searched for when it has a variable number of elements in the composite?"
There are some rules and restrictions when using multiple composites, read here and here
For CQL3, there are further limitations, read here
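To make the two aggregations concrete, here is what they mean in plain in-memory Java over the question's Datapoint shape (purely to pin down the desired semantics; in Cassandra you would typically denormalize into one table per query pattern, or use counter columns, rather than compute this client-side):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TagAggregation {
    // Mirrors the question's Datapoint: a value plus free-form tags
    record Datapoint(String metricName, long value, Map<String, String> tags) {}

    public static void main(String[] args) {
        List<Datapoint> points = List.of(
            new Datapoint("IMap.writeCount", 10, Map.of("member", "192.168.1.1:5701", "id", "mapx")),
            new Datapoint("IMap.writeCount", 20, Map.of("member", "192.168.1.2:5701", "id", "mapx")),
            new Datapoint("IMap.writeCount", 30, Map.of("member", "192.168.1.1:5701", "id", "mapy")));

        // Aggregation 1: same map id across machines -> mapx = 10 + 20 = 30
        Map<String, Long> perMap = new HashMap<>();
        for (Datapoint p : points) {
            perMap.merge(p.tags().get("id"), p.value(), Long::sum);
        }
        System.out.println(perMap.get("mapx")); // 30

        // Aggregation 2: all maps together -> 10 + 20 + 30 = 60
        long total = points.stream().mapToLong(Datapoint::value).sum();
        System.out.println(total); // 60
    }
}
```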

Query WadPerformanceCountersTable in Increments?

I am trying to query the WadPerformanceCountersTable generated by Azure Diagnostics, which has a PartitionKey based on ticks, accurate to the minute. This PartitionKey is stored as a string (which I have no control over).
I want to be able to query against this table to get data points for every minute, every hour, every day, etc., so I don't have to pull all of the data (I just want a sampling to approximate it). I was hoping to use the modulus operator to do this, but since the PartitionKey is stored as a string and this is an Azure Table, I am having issues.
Is there any way to do this?
Non-working example:
var query =
    (from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
     where
        long.Parse(entity.PartitionKey) % interval == 0 && // bad for a variety of reasons
        String.Compare(entity.PartitionKey, partitionKeyEnd, StringComparison.Ordinal) < 0 &&
        String.Compare(entity.PartitionKey, partitionKeyStart, StringComparison.Ordinal) > 0
     select entity)
    .AsTableServiceQuery();
If you just want to get a single row between two points in time (now and N time back), you can use the following query, which returns the single row as described here:
// 10 minutes span Partition Key
DateTime now = DateTime.UtcNow;
// Current Partition Key
string partitionKeyNow = string.Format("0{0}", now.Ticks.ToString());
DateTime tenMinutesSpan = now.AddMinutes(-10);
string partitionKeyTenMinutesBack = string.Format("0{0}", tenMinutesSpan.Ticks.ToString());
// Get a single row sampled from the last 10 minutes
CloudTableQuery<PerformanceCountersEntity> cloudTableQuery =
    (
        from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
        where
            entity.PartitionKey.CompareTo(partitionKeyNow) < 0 &&
            entity.PartitionKey.CompareTo(partitionKeyTenMinutesBack) > 0
        select entity
    ).Take(1).AsTableServiceQuery();
The only way I can see to do this would be to create a process to keep the Azure table in sync with another version of itself. In this table, I would store the PartitionKey as a number instead of a string. Once done, I could use a method similar to what I wrote in my question to query the data.
However, this is a waste of resources, so I don't recommend it. (I'm not implementing it myself, either.)
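An alternative that avoids the shadow table: since you can compute ticks yourself, you can generate one partition key per sampling boundary client-side and issue a narrow range query with Take(1) for each, as the answer above does for a single interval. A sketch of the key generation (Java here just to illustrate; 621355968000000000 is the .NET tick count at the Unix epoch, and the "0"-prefixed key format is the one used in the answer above):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class WadPartitionKeys {
    // .NET ticks are 100 ns units since 0001-01-01; this is the tick count at 1970-01-01
    static final long EPOCH_TICKS = 621_355_968_000_000_000L;

    // WAD-style partition key: "0" followed by the tick count
    static String partitionKey(Instant t) {
        return "0" + (EPOCH_TICKS + t.toEpochMilli() * 10_000L);
    }

    // One key per sampling boundary; each one can anchor a small range query + Take(1)
    static List<String> sampleBoundaries(Instant from, Instant to, long stepMinutes) {
        List<String> keys = new ArrayList<>();
        for (Instant t = from; !t.isAfter(to); t = t.plusSeconds(stepMinutes * 60)) {
            keys.add(partitionKey(t));
        }
        return keys;
    }

    public static void main(String[] args) {
        Instant from = Instant.parse("2012-01-01T00:00:00Z");
        List<String> keys = sampleBoundaries(from, from.plusSeconds(3600), 10);
        System.out.println(keys.size()); // 7 boundaries across one hour at 10-minute steps
        System.out.println(keys.get(0));
    }
}
```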

CouchDB function to sample records at a given interval

I have records with a time value and need to be able to query them for a span of time and return only records at a given interval.
For example I may need all the records from 12:00 to 1:00 in 10 minute intervals giving me 12:00, 12:10, 12:20, 12:30, ... 12:50, 01:00. The interval needs to be a parameter and it may be any time value. 15 minutes, 47 seconds, 1.4 hours.
I attempted to do this with some kind of reduce, but that is apparently the wrong place to do it.
Here is what I have come up with. Comments are welcome.
Created a view for the time field so I can query a range of times. The view outputs the id and the time.
function(doc) {
    emit([doc.rec_id, doc.time], [doc._id, doc.time]);
}
Then I created a list function that accepts a parameter called interval. In the list function I work through the rows and compare the current row's time to the last accepted time. If the span is greater than or equal to the interval, I add the row to the output and JSON-ify it.
function(head, req) {
    // default to 30000 ms, or 30 seconds.
    var interval = 30000;

    // get the interval from the request (query parameters arrive as strings).
    if (req.query.interval) {
        interval = parseInt(req.query.interval, 10);
    }

    // setup
    var row;
    var rows = [];
    var lastTime = 0;

    // go through the results...
    while (row = getRow()) {
        // if the time from the view is more than the interval
        // past our last accepted time, then add it.
        if (row.value[1] - lastTime > interval) {
            lastTime = row.value[1];
            rows.push(row);
        }
    }

    // JSON-ify!
    send(JSON.stringify({'rows' : rows}));
}
So far this is working well. I will test against some large data to see how the performance is. Any comments on how this could be done better or would this be the correct way with couch?
CouchDB is relaxed. If this is working for you, then I'd say stick with it and focus on your next top priority.
One quick optimization is to try not to build up a final answer in the _list function, but rather send() little pieces of the answer as you know them. That way, your function can run on an unlimited result size.
However, as you suspected, you are using a _list function basically to do an ad-hoc query which could be problematic as your database size grows.
I'm not 100% sure what you need, but if you are looking for documents within a time frame, there's a good chance that emit() keys should primarily sort by time. (In your example, the primary (leftmost) sort value is doc.rec_id.)
For a map function:
function(doc) {
    var key = doc.time; // Just sort everything by timestamp.
    emit(key, [doc._id, doc.time]);
}
That will build a map of all documents, ordered by the time timestamp. (I will assume the time value is something like JSON.stringify(new Date), i.e. "2011-05-20T00:34:20.847Z".)
To find all documents within a 1-hour interval, just query the map view with ?startkey="2011-05-20T00:00:00.000Z"&endkey="2011-05-20T01:00:00.000Z".
If I understand your "interval" criteria correctly, then if you need 10-minute intervals, then if you had 00:00, 00:15, 00:30, 00:45, 00:50, then only 00:00, 00:30, 00:50 should be in the final result. Therefore, you are filtering the normal couch output to cut out unwanted results. That is a perfect job for a _list function. Simply use req.query.interval and only send() the rows that match the interval.

JSR 256 battery events

How can I detect whenever the power cord is unplugged from the electrical socket using JSR 256?
You would add javax.microedition.io.Connector.sensor to the API Permissions tab of the Application Descriptor of the project properties.
From a quick look at the specifications of the JSR:
(you might want to look for code examples, starting with Appendix D of the spec itself, the latest JavaME SDK, Sony Ericsson developer website, then google)
As always, I would be worried about fragmentation in the diverse implementations of the JSR, but here's my first idea:
import javax.microedition.sensor.*;

SensorInfo[] powerSensorInfoArray = SensorManager.findSensors("power", "ambient");

// let's assume there is one SensorInfo in the array.
// open a connection to the sensor.
SensorConnection connection = (SensorConnection)Connector.open(powerSensorInfoArray[0].getUrl(), Connector.READ);

// add a DataListener to the connection
connection.setDataListener(new MyDataListener(), 1);

// implement the data listener
public class MyDataListener implements DataListener {
    public void dataReceived(SensorConnection aSensor, Data[] aDataArray, boolean isDataLost) {
        // let's assume there is only one channel for the sensor and no data was lost.
        // figure out what kind of data the channel provides.
        int dataType = aDataArray[0].getChannelInfo().getDataType();

        // now, I suggest you switch on dataType and print the value on the screen.
        // Experimentation on the JSR-256 implementation you're targeting seems to be
        // the only way to figure out how power data is formatted and what the values mean.
        // Only one of the following 3 lines will work:
        double[] valueArray = aDataArray[0].getDoubleValues();
        int[] valueArray = aDataArray[0].getIntValues();
        Object[] valueArray = aDataArray[0].getObjectValues();

        // let's assume one value in the valueArray
        String valueToPrint = "" + valueArray[0];

        // see what happens with that when you plug or unplug the power supply cable.
    }
}
You'll need to add javax.microedition.io.Connector.sensor to your MIDlet permissions.
-------EDIT------
Documentation from the JSR-256 implementation on Sony-Ericsson Satio phone (S60 5th edition):
The battery charge sensor has the following characteristics:
Quantity: battery_charge
Context type: device
URL: sensor:battery_charge;contextType=device;model=SonyEricsson
Channels: (index: name, range, unit)
0: battery_charge, 0-100, percent
1: charger_state, 0-1, boolean
