Worker stuck in a Sandbox? - multithreading

Trying to figure out why I can log in with my REST API just fine on the main thread, but not in a worker. All communication channels are operating fine, and I am able to load the worker up no problem. However, when it tries to send some data, it just hangs.
[Embed(source="../bin/BGThread.swf", mimeType="application/octet-stream")]
private static var BackgroundWorker_ByteClass:Class;

public static function get BackgroundWorker():ByteArray
{
    return new BackgroundWorker_ByteClass();
}
In a test script:
public function Main()
{
    fBCore.init("secrets", "my-firebase-id");
    trace("Init");
    //fBCore.auth.addEventListener(FBAuthEvent.LOGIN_SUCCES, handleFBSuccess);
    fBCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, handleFBSuccess);
    fBCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleIOError);
    fBCore.auth.email_login("admin@admin.admin", "password");
}

private function handleIOError(e:IOErrorEvent):void
{
    trace("IO error");
    trace(e.text); // Nothing here
}

private function handleFBSuccess(e:AuthEvent):void
{
    trace("Main login success.");
    trace(e.message); // Complete success.
}
When triggered by a class via an internal worker channel passed from Main on init:
Primordial:
private function handleLoginClick(e:MouseEvent):void
{
    login_mc.buttonMode = false;
    login_mc.play();
    login_mc.removeEventListener(MouseEvent.CLICK, handleLoginClick);
    log("Logging in as " + email_mc.text_txt.text);
    commandChannel.send([BGThreadCommand.LOGIN, email_mc.text_txt.text, password_mc.text_txt.text]);
}
Worker:
...
case BGThreadCommand.LOGIN:
    log("Logging in with " + message[1] + "::" + message[2]); // The log goes to a progress channel and reaches the main thread successfully.
    fbCore.auth.email_login(message[1], message[2]);
    fbCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, loginSuccess); // Nothing
    fbCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleLoginIOError); // Fires
    break;
Auth REST class: https://github.com/sfxworks/FirebaseREST/blob/master/src/net/sfxworks/firebaseREST/Auth.as
Is this a worker limitation or a security sandbox issue? I have a deep feeling it is the latter of the two. If that's the case, how would I load the worker in a way that also gives it the proper permissions to act?

I completely ignored the giveAppPrivileges parameter of the createWorker function. Sorry, Stack Overflow; sometimes I write bad questions when I get little (or, in this case, no) sleep the night before.
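For anyone who lands here, a minimal sketch of what that fix looks like, assuming an AIR runtime (giveAppPrivileges is the second parameter of WorkerDomain.createWorker and is honored in AIR only; BackgroundWorker is the embedded-bytes getter from the question):

    import flash.system.Worker;
    import flash.system.WorkerDomain;

    // Passing true for giveAppPrivileges runs the worker in the application
    // sandbox, so its network requests get the same permissions as the
    // primordial worker instead of hanging in a restricted sandbox.
    var worker:Worker = WorkerDomain.current.createWorker(BackgroundWorker, true);
    worker.start();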

Related

Cocos2d-x multithreading scenario crashes the game

My scenario is simple: I made a game using cocos2d-x, and I want to download avatar images (Facebook and Google Play) for multiplayer users and show them, once the download is done, as the texture for a button.
In an ideal world, things work as expected.
Things get tricky when those buttons get deleted before the download is done:
the callback function is then in a weird state, I get signal 11 (SIGSEGV), code 1 (SEGV_MAPERR),
and the app crashes.
This is how I implemented it.
I have a Layout class called PlayerIcon; the .cpp looks like this:
void PlayerIcon::setPlayer(string userName, string displayName, string avatarUrl)
{
    try
    {
        //some code here
        downloadAvatar(_userName, _avatarUrl);
        //some code here
    }
    catch (...)
    {
    }
}

void PlayerIcon::downloadAvatar(std::string _avatarFilePath, std::string url)
{
    if (!isFileExist(_avatarFilePath))
    {
        try
        {
            auto downloader = new Downloader();
            downloader->onFileTaskSuccess = CC_CALLBACK_1(PlayerIcon::on_download_success, this);
            downloader->onTaskError = [&](const network::DownloadTask& task,
                                          int errorCode,
                                          int errorCodeInternal,
                                          const std::string& errorStr)
            {
                log("error while saving image");
            };
            downloader->createDownloadFileTask(url, _avatarFilePath, _avatarFilePath);
        }
        catch (const std::exception& e)
        {
            log("error while saving image: test");
        }
    }
    else
    {
        //set texture for button
    }
}
void PlayerIcon::on_download_success(const network::DownloadTask& task)
{
    _isDownloading = false;
    Director::getInstance()->getScheduler()->performFunctionInCocosThread(CC_CALLBACK_0(PlayerIcon::reload_avatar, this));
}

void PlayerIcon::reload_avatar()
{
    try
    {
        // setting texture in UI thread
    }
    catch (...)
    {
        log("error updating avatar");
    }
}
As I said, things work fine until a PlayerIcon is deleted before the download is done.
I don't know what happens when the callback of the download task points to a method of an object that has been deleted (or flagged for deletion).
I looked at the Downloader implementation, and it doesn't provide any cancellation mechanism,
so I'm not sure how to handle this.
Also, is it normal to have a 10% crash rate on the Google Play console for a cocos2d-x game?
Any help is really appreciated.
Do you delete the Downloader in the destructor of PlayerIcon?
There is a doDestroy in the Apple implementation which is triggered by the destructor:
-(void)doDestroy
{
    // cancel all download task
    NSEnumerator * enumeratorKey = [self.taskDict keyEnumerator];
    for (NSURLSessionDownloadTask *task in enumeratorKey)
    {
        ....

DownloaderApple::~DownloaderApple()
{
    DeclareDownloaderImplVar;
    [impl doDestroy];
    DLLOG("Destruct DownloaderApple %p", this);
}
In the demo code of cocos2d-x (DownloaderTest.cpp) they use:
    std::unique_ptr<network::Downloader> downloader;
    downloader.reset(new cocos2d::network::Downloader());
instead of:
    auto downloader = new Downloader();
It looks like you are building this network code as part of your scene tree. If you do a replaceScene/popScene...() call while the async network code is running in the background, the callback's owner will disappear (the scene will be deleted from the scene stack) and you will get a SEGFAULT.
If this is the way you've coded it, you might want to extract the network code into a global object (singleton) where you queue the requests, fetch them from the internet, and store the results (or their name and location) in the global object's output queue. The scene code can then check whether the avatar has been received by inquiring on the global object, and load the avatar sprite at that point; see the sketch below.
Note, this may be an intermittent problem which depends on the speed of your machine and the network, so it may not be triggered consistently.
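A minimal sketch of that kind of global object, assuming the cocos2d-x network::Downloader API shown above (the AvatarCache name and its polling interface are made up for illustration):

    #include "network/CCDownloader.h"
    #include <mutex>
    #include <string>
    #include <unordered_map>

    class AvatarCache
    {
    public:
        static AvatarCache& instance()
        {
            static AvatarCache cache; // lives for the whole process, unlike a scene
            return cache;
        }

        // Kick off a download; the result is recorded here, not in a scene node.
        void fetch(const std::string& url, const std::string& storagePath)
        {
            _downloader.onFileTaskSuccess = [this](const cocos2d::network::DownloadTask& task)
            {
                std::lock_guard<std::mutex> guard(_mutex);
                _ready[task.requestURL] = task.storagePath;
            };
            _downloader.createDownloadFileTask(url, storagePath);
        }

        // Scenes poll this (e.g. from an update() callback) instead of
        // handing the downloader a pointer tied to their own lifetime.
        bool ready(const std::string& url, std::string& pathOut)
        {
            std::lock_guard<std::mutex> guard(_mutex);
            auto it = _ready.find(url);
            if (it == _ready.end())
                return false;
            pathOut = it->second;
            return true;
        }

    private:
        cocos2d::network::Downloader _downloader;
        std::mutex _mutex;
        std::unordered_map<std::string, std::string> _ready; // url -> saved file
    };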
Another solution: you could just set the function pointers to nullptr in your PlayerIcon::~PlayerIcon() destructor:
    downloader->setOnFileTaskSuccess(nullptr);
    downloader->setOnTaskProgress(nullptr);
Then there will be no attempt to call your callback functions, and the SEGFAULT will (hopefully) be avoided.
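For instance (a sketch, assuming the downloader is kept as a member such as std::unique_ptr<network::Downloader> _downloader, as in DownloaderTest.cpp):

    PlayerIcon::~PlayerIcon()
    {
        if (_downloader)
        {
            // Detach every callback so a download that completes late
            // has nothing of ours left to call into.
            _downloader->setOnFileTaskSuccess(nullptr);
            _downloader->setOnTaskProgress(nullptr);
            _downloader->setOnTaskError(nullptr);
        }
    }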

P4API.net: how to use P4Callbacks delegates

I am working on a small tool to schedule p4 sync daily at specific times.
In this tool, I want to display the output from the P4API while it is running commands.
I can see that P4API.net has a P4Callbacks class with several delegates: InfoResultsDelegate, TaggedOutputDelegate, LogMessageDelegate, ErrorDelegate.
My question is: how can I use those? I could not find a single example of that online. A short code example would be amazing!
Note: I am quite a beginner and have never used delegates before.

Answering my own question with an example; I ended up figuring it out myself. It is a simple event.
Note that this only works with P4Server. My last attempt at getting tagged output from a P4.Connection was unsuccessful; the events were never triggered when running a command.
So, here is a code example:
P4Server p4Server = new P4Server(syncPath);
p4Server.TaggedOutputReceived += P4ServerTaggedOutputEvent;
p4Server.ErrorReceived += P4ServerErrorReceived;

bool syncSuccess = false;
try
{
    P4Command syncCommand = new P4Command(p4Server, "sync", true, syncPath + "\\...");
    P4CommandResult rslt = syncCommand.Run();
    syncSuccess = true;
    // Here you can read the content of the P4CommandResult,
    // but it will only be accessible when the command is finished.
}
catch (P4Exception ex) // Will be caught only when the command has completely failed
{
    Console.WriteLine("P4Command failed: " + ex.Message);
}
And here are the two handler methods; these will be triggered while the sync command is being executed:
private void P4ServerErrorReceived(uint cmdId, int severity, int errorNumber, string data)
{
    Console.WriteLine("P4ServerErrorReceived:" + data);
}

private void P4ServerTaggedOutputEvent(uint cmdId, int ObjId, TaggedObject Obj)
{
    Console.WriteLine("P4ServerTaggedOutputEvent:" + Obj["clientFile"]);
}

Akka.net Ask timeout when used in Azure WebJob

At work we have some code in an Azure WebJob where we use RabbitMQ.
The basic workflow is this:
A message arrives on a RabbitMQ queue.
We have a message handler for the incoming message.
Within the message handler we start a top-level (user) supervisor actor, which we "ask" to handle the message.
The supervisor actor hierarchy is like this, and the relevant top-level code is something like this (this is the WebJob code):
static void Main(string[] args)
{
    try
    {
        // Bootstrap the Akka IoC resolver well ahead of any actor usages
        new AutoFacDependencyResolver(ContainerOperations.Instance.Container, ContainerOperations.Instance.Container.Resolve<ActorSystem>());

        var system = ContainerOperations.Instance.Container.Resolve<ActorSystem>();
        var busQueueReader = ContainerOperations.Instance.Container.Resolve<IBusQueueReader>();
        var dateTime = ContainerOperations.Instance.Container.Resolve<IDateTime>();

        busQueueReader.AddHandler<ProgramCalculationMessage>("RabbitQueue", x =>
        {
            // This is code that gets called whenever a RabbitMQ message arrives
            try
            {
                // SupervisorActor is a singleton
                var supervisorActor = ContainerOperations.Instance.Container.ResolveNamed<IActorRef>("SupervisorActor");
                var actorMessage = new SomeActorMessage();
                var supervisorRunTask = supervisorActor.Ask(actorMessage, TimeSpan.FromMinutes(25));

                // we want to wait this guy out
                var supervisorRunResult = supervisorRunTask.GetAwaiter().GetResult();
                switch (supervisorRunResult)
                {
                    case CompletedEvent completed:
                    {
                        break;
                    }
                    case FailedEvent failed:
                    {
                        throw failed.Exception;
                    }
                }
            }
            catch (Exception ex)
            {
                _log.Error(ex, "Error found in WebJob");
                // rethrow for the actual RabbitMqQueueReader handler so the message gets NACKed
                throw;
            }
        });

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception ex)
    {
        _log.Error(ex, "Error found");
        throw;
    }
}
And this is the relevant IoC code (we are using Autofac + Akka.NET DI for Autofac):
builder.RegisterType<SupervisorActor>();

_actorSystem = new Lazy<ActorSystem>(() =>
{
    var akkaconf = ActorUtil.LoadConfig(_akkaConfigPath).WithFallback(ConfigurationFactory.Default());
    return ActorSystem.Create("WebJobSystem", akkaconf);
});

builder.Register<ActorSystem>(cont => _actorSystem.Value);
builder.Register(cont =>
{
    var system = cont.Resolve<ActorSystem>();
    return system.ActorOf(system.DI().Props<SupervisorActor>(), "SupervisorActor");
})
.SingleInstance()
.Named<IActorRef>("SupervisorActor");
The problem
So the code is working fine and doing what we want it to, apart from the Akka.NET "ask" timeout shown above in the WebJob code.
Annoyingly, this seems to work fine if I run the WebJob locally, where I can simulate an "ask" timeout by providing a supervisor actor that simply never responds with a message back to the sender.
That works perfectly on my machine, but when we run this code in Azure, we DO NOT see a timeout for the "ask", even though one of our workflow runs exceeded the "ask" timeout by a mile.
I just don't know what could be causing this behavior. Does anyone have any ideas?
Could there be some Azure-specific config value for the WebJob that I need to set?
The answer to this was to use the async Rabbit handlers, which apparently came out in v5.0 of the C# RabbitMQ client. The official docs sadly still show the sync usage.
This article is quite good: https://gigi.nullneuron.net/gigilabs/asynchronous-rabbitmq-consumers-in-net/
Once we did this, all was good.
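For anyone hitting the same thing, a minimal sketch of the async consumer pattern from that article, assuming RabbitMQ.Client 5.x (the host name, queue name, and handler body are placeholders):

    using System;
    using System.Threading.Tasks;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class Program
    {
        static void Main()
        {
            // DispatchConsumersAsync switches the client to async dispatch (v5.0+).
            var factory = new ConnectionFactory
            {
                HostName = "localhost", // placeholder broker address
                DispatchConsumersAsync = true
            };

            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                var consumer = new AsyncEventingBasicConsumer(channel);
                consumer.Received += async (sender, ea) =>
                {
                    // Long-running async work (such as awaiting an actor Ask
                    // with a long timeout) no longer blocks the dispatch loop.
                    await HandleMessageAsync(ea);
                    channel.BasicAck(ea.DeliveryTag, multiple: false);
                };

                channel.BasicConsume(queue: "RabbitQueue", autoAck: false, consumer: consumer);
                Console.ReadLine(); // keep the process alive while consuming
            }
        }

        static Task HandleMessageAsync(BasicDeliverEventArgs ea) => Task.CompletedTask;
    }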

Redis connections may not be closing with C#

I'm connecting to Azure Redis, and the portal shows me the number of open connections to my Redis server. I've got the following C# code that encloses all my Redis sets and gets. Should this be leaking connections?
using (var connectionMultiplexer = ConnectionMultiplexer.Connect(connectionString))
{
    lock (Locker)
    {
        redis = connectionMultiplexer.GetDatabase();
    }

    var o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
    if (o != null)
    {
        return o;
    }

    lock (Locker)
    {
        // take the lock, but let it expire if this app crashes before release, to avoid deadlock
        //using (redis.AcquireLock(cacheKeyName + "-lock", TimeSpan.FromSeconds(60)))
        var lockKey = cacheKeyName + "-lock";
        if (redis.LockTake(lockKey, Environment.MachineName, TimeSpan.FromSeconds(10)))
        {
            try
            {
                o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
                if (o == null)
                {
                    o = func();
                    redis.StringSet(cacheKeyName, CacheSerializer.Serialize(o),
                        TimeSpan.FromSeconds(cacheTimeOutSeconds));
                }
                return o; // the lock is released in the finally block below
            }
            finally
            {
                redis.LockRelease(lockKey, Environment.MachineName);
            }
        }
        return o;
    }
}
You can keep the connectionMultiplexer in a static variable and not create it for every get/set. That will keep one connection to Redis always open and make your operations faster.
Update:
Please have a look at the StackExchange.Redis basic usage docs:
https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/Basics.md
"Note that ConnectionMultiplexer implements IDisposable and can be disposed when no longer required, but I am deliberately not showing using statement usage, because it is exceptionally rare that you would want to use a ConnectionMultiplexer briefly, as the idea is to re-use this object."
This works nicely for me, keeping a single connection to Azure Redis (sometimes it creates two connections, but that is by design). Hope it helps.
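A minimal sketch of that shared-multiplexer pattern (the connection string is a placeholder):

    using System;
    using StackExchange.Redis;

    public static class RedisConnection
    {
        // One multiplexer for the whole process; it multiplexes many logical
        // operations over a small, fixed set of sockets, so the connection
        // count shown in the Azure portal stays flat.
        private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
            new Lazy<ConnectionMultiplexer>(() =>
                ConnectionMultiplexer.Connect("your-cache.redis.cache.windows.net,ssl=true,password=..."));

        public static ConnectionMultiplexer Connection => LazyConnection.Value;

        // GetDatabase() is cheap and thread-safe; call it per operation.
        public static IDatabase Db => Connection.GetDatabase();
    }

The get/set code above can then use RedisConnection.Db directly, with no using block and no lock around GetDatabase().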
I was suggesting trying the Close (or CloseAsync) method explicitly. In a test setting you may be using different connections for different test cases and not want to share a single multiplexer. A search for public code using the Redis client shows a pattern of Close followed by Dispose calls.
Note that in the XML method documentation of the Redis client, the Close method is described as doing more:
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
public void Close(bool allowCommandsToComplete = true);
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
[AsyncStateMachine(typeof(<CloseAsync>d__183))]
public Task CloseAsync(bool allowCommandsToComplete = true);
...
//
// Summary:
// Release all resources associated with this object
public void Dispose();
And then I looked up the code for the client, found here:
https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/ConnectionMultiplexer.cs
We can see the Dispose method calling Close (not the usual overridable protected Dispose(bool)), furthermore with the wait for in-queue commands to complete set to true. It is an atypical dispose-pattern implementation: by attempting all those closures and waiting on them, it risks running into an exception, whereas the Dispose method contract is supposed to never throw one.
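So in a test setting where each case builds its own multiplexer, the teardown would look something like this (a sketch based on the signatures quoted above):

    // Close waits for in-queue commands to complete, then releases resources;
    // calling Dispose afterwards makes the intent explicit.
    connectionMultiplexer.Close(allowCommandsToComplete: true);
    connectionMultiplexer.Dispose();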

Nested IMessageQueueClient publish using Servicestack InMemoryTransientMessageService

We are using InMemoryTransientMessageService to chain several one-way notifications between services. We cannot use the Redis provider, and we do not really need it so far; synchronous dispatching is enough.
We are experiencing problems when a publish happens inside a service that is handling another publish. In pseudo-code:
FirstService.Method()
    _messageQueueClient.Publish(obj);
        SecondService.Any(obj)
            _messageQueueClient.Publish(obj);
                ThirdService.Any(obj)
The second message is never handled. In the following code from ServiceStack's TransientMessageServiceBase, when the second publish is processed the service is already "isRunning", so it does not try to handle the second message:
public virtual void Start()
{
    if (isRunning) return;
    isRunning = true;

    this.messageHandlers = this.handlerMap.Values.ToList().ConvertAll(
        x => x.CreateMessageHandler()).ToArray();

    using (var mqClient = MessageFactory.CreateMessageQueueClient())
    {
        foreach (var handler in messageHandlers)
        {
            handler.Process(mqClient);
        }
    }

    this.Stop();
}
I'm not sure about the impact of changing this behaviour in order to be able to nest/chain message publications. Do you think it is safe to remove this check? Any other ideas?
After some tests, it seems there is no problem with removing the "isRunning" check: all nested publications are executed correctly.
