SharedObject in background worker AS3 AIR (Android, iOS) - multithreading

I'm developing an AIR app for Android and iOS, and I'm using workers to perform a heavy task in the background (two SWFs). I want to use SharedObjects in the worker SWF (the background worker). Is this possible, or will the data be lost?

All you need to do to write to the same SharedObject from both SWFs is to specify the same name and path for each, like:
mySO = SharedObject.getLocal("myObjectFile", "/");
More info here: http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d80.html
Here is a short snippet to reassure you that the data won't be lost; workers actually behave like multiple Flash Players running simultaneously. Just run two or more SWFs and watch the result:
import flash.net.SharedObject;
import flash.utils.setTimeout;

var iterations:int = 100;

function writeToSo():void
{
    var mySO:SharedObject = SharedObject.getLocal("myObjectFile", "/");
    if (iterations > 0)
    {
        if (!mySO.data.str) mySO.data.str = "";
        mySO.data.str += int(Math.random() * 10);
        iterations--;
    }
    // txt is a TextField on the stage
    txt.text = "str: " + mySO.data.str + " symbolsTotal: " + mySO.data.str.length + "\n";
    setTimeout(writeToSo, Math.random() * 100);
}
setTimeout(writeToSo, 2000);
Also, think about how to synchronise your workers if you need the data written in a specific order.
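One way to enforce ordering (a sketch of mine, not part of the original answer) is to let the main SWF tell the worker when to write over a MessageChannel, since messages arrive in the order they are sent. This assumes the worker SWF's bytes are already loaded into a workerBytes ByteArray:
import flash.events.Event;
import flash.system.MessageChannel;
import flash.system.Worker;
import flash.system.WorkerDomain;

// Main SWF: create the worker and a channel to it
var worker:Worker = WorkerDomain.current.createWorker(workerBytes);
var mainToWorker:MessageChannel = Worker.current.createMessageChannel(worker);
worker.setSharedProperty("mainToWorker", mainToWorker);
worker.start();
mainToWorker.send("writeNextChunk"); // ask the worker to perform the next write

// Worker SWF: handle messages in arrival order, so writes stay ordered
var channel:MessageChannel = Worker.current.getSharedProperty("mainToWorker") as MessageChannel;
channel.addEventListener(Event.CHANNEL_MESSAGE, function(e:Event):void
{
    var msg:String = channel.receive();
    // ... open the SharedObject and write the next piece of data here ...
});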

Related

Codename One sound Media.play() memory leak?

My app uses some short sounds for user feedback. I use the following code:
private void playSound(String fileName) {
    try {
        FileSystemStorage fss = FileSystemStorage.getInstance();
        String sep = fss.getFileSystemSeparator() + "";
        String soundDir; // sounds must be in a directory
        if (fss.getAppHomePath().endsWith(sep)) {
            soundDir = fss.getAppHomePath() + "sounds"; // device
        } else {
            soundDir = fss.getAppHomePath() + sep + "sounds"; // simulator/windows
        }
        if (!fss.exists(soundDir)) {
            // first time a sound is played: create directory
            fss.mkdir(soundDir);
        }
        String filePath = soundDir + sep + fileName;
        if (!fss.exists(filePath)) {
            // first time this sound is played: copy from resources (place file in <project>/src)
            InputStream is = Display.getInstance().getResourceAsStream(getClass(), "/" + fileName);
            OutputStream os = fss.openOutputStream(filePath);
            com.codename1.io.Util.copy(is, os);
        }
        Media media = MediaManager.createMedia(filePath, false);
        //media.setVolume(100);
        media.play();
    } catch (IOException ex) {
        log("Error playing " + fileName + " " + ex.getMessage());
    }
}
Example call:
playSound("error.mp3");
This works fine on devices and in the simulator. However, if I run a long automated test in the simulator (under Windows), playing a sound about every second, it eats up all the RAM until Windows crashes. The Windows Task Manager, however, shows no exceptional memory usage for NetBeans or the Java process.
So my questions are: Is my code correct? Can this happen on devices too? Otherwise, is there a way to prevent this in the simulator/Windows?
P.S.
I also tried the code from How to bundle sounds with Codename One?. That has the same problem, and in addition some sounds get lost (are not played).
I also tried the simple code from Codename One - Play a sound, but that doesn't work.
We generally recommend keeping the Media instance around for this sort of use case.
But if you can't, just make sure to call cleanup when you're done:
MediaManager.addCompletionHandler(media, () -> media.cleanup());
media.play();
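Keeping the instances around could look like the following minimal sketch; the mediaCache field and playCached helper are illustrative names of mine, not Codename One API:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import com.codename1.media.Media;
import com.codename1.media.MediaManager;

// Cache so each sound file is turned into a Media instance only once
private final Map<String, Media> mediaCache = new HashMap<>();

private void playCached(String filePath) throws IOException {
    Media media = mediaCache.get(filePath);
    if (media == null) {
        media = MediaManager.createMedia(filePath, false);
        mediaCache.put(filePath, media);
    }
    media.setTime(0); // rewind in case this sound was played before
    media.play();
}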

Building a chat app: How to get time

I am building a chat app with PubNub. The problem, from the app/front-end point of view, is how to get the time (server time). If every message went through my own server, I could take the server time there. But with a 3rd-party service like PubNub, how can I manage this, since the app sends messages to PubNub rather than to my server? I don't want to rely on local time, as users might have inaccurate clocks.
The simplest solution I thought of is: when the app starts up, get the server time and record the difference between local time and server time (diff = Date.now() - serverTime). When sending a message, its timestamp is then Date.now() - diff. Is this correct so far?
I guess this solution assumes zero transmission (latency) time. Is there a more correct or recommended way to implement this?
Your use case is probably the reason why pubnub.time() exists.
In fact, they even have a code example implementing exactly your drift calculation:
https://github.com/pubnub/javascript/blob/1fa0b48227625f92de9460338c222152c853abda/examples/time-drift-detla-detection/drift-delta-detection.html
// Drift Functions
function now() { return +new Date; }

function clock_drift(cb) {
    clock_drift.start = now();
    PUBNUB.time(function(timetoken) {
        var latency     = (now() - clock_drift.start) / 2  // assume a symmetric round trip
          , server_time = (timetoken / 10000) + latency    // timetokens are in units of 10^-7 s
          , local_time  = now()
          , drift       = local_time - server_time;
        cb(drift);
    });
    if (clock_drift.ival) return;
    clock_drift.ival = setInterval(function() { clock_drift(cb); }, 1000);
}
// This is how you use the code
// Periodically get the clock drift in milliseconds
clock_drift(function(drift) {
    var out = PUBNUB.$('latency');
    out.innerHTML = "Clock Drift Delta: " + drift + "ms";
    // Flash Update
    PUBNUB.css(out, { background: drift > 2000 ? '#f32' : '#5b5' });
    setTimeout(function() {
        PUBNUB.css(out, { background: '#444' });
    }, 300);
});
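To connect this back to the question: once the drift is known, an outgoing message can be stamped with an approximation of server time, which is exactly the diff idea proposed above. A minimal sketch, with variable names of my own:
var lastDrift = 0;
clock_drift(function(drift) { lastDrift = drift; });

// Approximate server time for an outgoing message, in ms
function serverNow() {
    return Date.now() - lastDrift;
}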

Multiple thread completion time measurement

Use case: I have a huge log file, which I'm reading on the main thread chunk by chunk (equal-size IO reads). Each chunk read takes approximately 1 s on my test machine. After reading each chunk, I use a thread pool to start a thread per chunk that inserts it into one of 2 DB instances. Now I have 2 challenges:
I have to insert chunks into the 2 DBs alternately, i.e. odd chunks go to the 1st DB and even chunks go to the 2nd DB. I don't have anything in the chunk model that tells me the chunk number, so I tried to create a wrapper around the chunk model with a "chunkCount" field, but where do I increment chunkCount?
How do I measure the time of each insert, given that the inserts run on different threads from the thread pool?
I tried the following code experimentally, but it doesn't yield any results:
logEventsChunk = logFetcher.GetNextLogEventsChunk();
chunkModel = new LogEventChunkModel();
stw = new Stopwatch();
chunkModel.ChunkCount = chunkCount;
chunkModel.LogeventChunk = logEventsChunk;
//chunkCount++;
ThreadPool.QueueUserWorkItem(new WaitCallback(delegate(object state)
{
    InsertChunk(chunkModel, collection, secondCollection, stw);
}), null);
The InsertChunk method is here:
private void InsertChunk(LogEventChunkModel logEventsChunk, MongoCollection<LogEvent> collection, MongoCollection<LogEvent> secondCollection, Stopwatch stw)
{
    chunkCount++;
    stw.Start();
    MongoInsertOptions options = new MongoInsertOptions();
    options.WriteConcern = WriteConcern.Unacknowledged;
    options.CheckElementNames = true;
    string db = string.Empty;
    {
        //DateTime dtWrite = DateTime.Now;
        if (logEventsChunk.ChunkCount % 2 == 0)
        {
            DateTime dtWrite1 = DateTime.Now;
            collection.InsertBatch(logEventsChunk.LogeventChunk.LogEvents, options);
            db = "FirstDB";
            //Console.WriteLine("Time taken to write the chunk: " + DateTime.Now.Subtract(dtWrite1).TotalSeconds.ToString() + " s. " + db);
        }
        else
        {
            DateTime dtWrite2 = DateTime.Now;
            secondCollection.InsertBatch(logEventsChunk.LogeventChunk.LogEvents, options);
            db = "SecondDB";
            //Console.WriteLine("Time taken to write the chunk: " + DateTime.Now.Subtract(dtWrite2).TotalSeconds.ToString() + " s. " + db);
        }
        Console.WriteLine("Thread Completed: {0} **********", Thread.CurrentThread.GetHashCode());
        stw.Stop();
        Console.WriteLine("Time taken to write the chunk: " + stw.ElapsedMilliseconds + " ms. " + db + " Chunk Count: " + logEventsChunk.ChunkCount);
        stw.Reset();
        //+ "Chunk Count: " + chunkCount.ToString()
        //Console.WriteLine("Time taken to write the chunk: " + DateTime.Now.Subtract(dtWrite).TotalSeconds.ToString() + " s. "+db);
        //mongoDBInsertionTotalTime += DateTime.Now.Subtract(dtWrite).TotalSeconds;
    }
}
Please ignore the commented-out lines; they are left over from experiments.
Rather than starting a new thread for each insertion, and trying to make the thread figure out which database to write to, start two persistent threads, each of which writes to a single database. Those threads get their data from queues. This is a pretty standard producer/consumer setup using BlockingCollection<T>.
So, you have:
// Maximum number of items in queue (to avoid out of memory errors)
const int MaxQueueSize = 10000;
BlockingCollection<LogEventChunkModel> Db1Queue = new BlockingCollection<LogEventChunkModel>(MaxQueueSize);
BlockingCollection<LogEventChunkModel> Db2Queue = new BlockingCollection<LogEventChunkModel>(MaxQueueSize);
In your main thread, start the database update threads:
var t1 = new Thread(DbWriteThreadProc);
t1.Start(new Tuple<string, BlockingCollection<LogEventChunkModel>>("FirstDB", Db1Queue));
var t2 = new Thread(DbWriteThreadProc);
t2.Start(new Tuple<string, BlockingCollection<LogEventChunkModel>>("SecondDb", Db2Queue));
Then, begin reading the log file and placing alternate chunks into the queues:
int chunkNumber = 0;
while (!EndOfLogFile)
{
    var chunk = GetNextChunk();
    if ((chunkNumber % 2) == 0)  // even-numbered chunks to DB 1, odd to DB 2
        Db1Queue.Add(chunk);
    else
        Db2Queue.Add(chunk);
    ++chunkNumber;
}
// end of data, so mark the queues as complete
Db1Queue.CompleteAdding();
Db2Queue.CompleteAdding();
// and wait for threads to complete processing the queues
t1.Join();
t2.Join();
Your write thread proc is pretty simple. All it does is service the queue and write to the database:
void DbWriteThreadProc(object state)
{
    // passed object is a Tuple<string, BlockingCollection<LogEventChunkModel>>
    // Get the items from it
    var threadData = (Tuple<string, BlockingCollection<LogEventChunkModel>>)state;
    string dbName = threadData.Item1;
    BlockingCollection<LogEventChunkModel> queue = threadData.Item2;

    // now read the queue and write to the database
    foreach (var chunk in queue.GetConsumingEnumerable())
    {
        var sw = Stopwatch.StartNew();
        // write chunk to the database.
        sw.Stop();
        Console.WriteLine("Time to write = {0:N0} ms", sw.ElapsedMilliseconds);
    }
}
GetConsumingEnumerable does a non-busy wait on the queue, so it's not continually polling. The loop completes when the queue is empty and has been marked as complete for adding (which is why the main thread calls CompleteAdding).
This approach has several advantages over what you had. In particular, it simplifies determining which database each chunk gets written to. It also uses at most three threads and guarantees that chunks are added to each database in the same order in which they were read from the log file. Your approach using QueueUserWorkItem does not guarantee insertion order, and it queues a separate work item per chunk, which can end up with a large number of concurrent inserts.

Why does the Node.js scripts console close instantly in Windows 8?

I've tried nearly every example script I can find. Every sample opens the terminal for a split second. Even this one closes as soon as input is entered. Is this normal?
var rl = require('readline');
var prompts = rl.createInterface(process.stdin, process.stdout);

prompts.question("How many servings of fruits and vegetables do you eat each day? ", function (servings) {
    var message = '';
    if (servings < 5) {
        message = "Since you're only eating " + servings +
            " right now, you might want to start eating " + (5 - servings) + " more.";
    } else {
        message = "Excellent, your diet is on the right track!";
    }
    console.log(message);
    process.exit();
});
There are 2 options that control this in Tools/Options/Node.js Tools/General:
Wait for input when process exits abnormally
Wait for input when process exits normally
Taken from https://nodejstools.codeplex.com/discussions/565665

Node JS with CouchDB for lots o' parsing

My team and I are playing around with NodeJS (with jsdom/jQuery), parsing a lot of HTML documents stored in CouchDB. NodeJS is single-threaded, so having 8 cores in a server doesn't help us at all initially. This is where I was wondering how best to create child processes (workers, perhaps?) to process each individual file as it's pulled out of CouchDB.
Here is my thought process:
The main NodeJS script loops through a CouchDB view every X minutes, getting the HTML files from the documents
It spawns a process per HTML file to parse it (jsdom/jQuery) and store the results
We aren't running a web server at all to handle any of this (it's all command line), so I'm unsure how to handle it outside of a generic "set up cron to just run each parsing job separately". It seems that workers are generally used to process requests coming into a web server.
Thoughts?
Use the cluster module:
var cluster = require("cluster");
var numCPUs = require('os').cpus().length;
var htmlDocs = [...]; // the documents to process

if (cluster.isMaster) {
    // Fork one worker per CPU.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died');
    });
} else {
    // Worker IDs start at 1, so shift down by one to cover index 0 as well.
    for (var i = cluster.worker.id - 1; i < htmlDocs.length; i += numCPUs) {
        couch.doWork(htmlDocs[i]);
    }
}
This is a classic case of doing work on the members of an array and splitting that work across multiple processes by having each process handle a subset of the array.
Note how we increment i by the number of processes: worker 1 does the 1st, 5th, 9th element and so on, worker 2 the 2nd, 6th, 10th, etc.
