SOGO: No child available to handle incoming request / pid xxx has been hanging in the same request for x minutes - activesync

I am trying to add SOGo to an already working server with Postfix + Dovecot.
The server is CentOS 7, 2 cores with 3 GB RAM and fewer than 10 users. SOGo is installed from the official repo: sogo-2.3.8-1.el7.centos.x86_64
/etc/sysconfig/sogo is set up for 10 workers: PREFORK=10
SOGo is configured with the following settings:
WOListenQueueSize=10;
WOWatchDogRequestTimeout=60;
SOGoMaximumPingInterval = 354;
SOGoMaximumSyncInterval = 354;
SOGoInternalSyncInterval = 15;
SOGoMaximumSyncWindowSize = 50;
SOGoMaximumSyncResponseSize = 2048;
The problem seems to be with ActiveSync clients (MS Outlook). The sogod processes start eating all the RAM and sometimes hang (a hanging process can't be killed with signal 15). The log file reports:
Feb 19 13:30:26 sogod [13164]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:30:26 sogod [13163]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:30:26 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:30:26 sogod [13155]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:30:27 sogod [13152]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:30:27 sogod [13150]: [WARN] pid 13168 has been hanging in the same request for 3 minutes
Feb 19 13:30:28 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:30:28 sogod [13150]: [WARN] pid 13164 has been hanging in the same request for 3 minutes
Feb 19 13:30:29 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:30:29 sogod [13150]: [WARN] pid 13163 has been hanging in the same request for 2 minutes
Feb 19 13:30:30 sogod [13168]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:30:30 sogod [13150]: [WARN] pid 13151 has been hanging in the same request for 1 minutes
Feb 19 13:35:03 sogod [13150]: [WARN] pid 13153 has been hanging in the same request for 5 minutes
Feb 19 13:35:04 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:35:06 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:35:07 sogod [13153]: Sleeping 15 seconds while detecting changes in Ping...
Feb 19 13:35:07 sogod [13150]: [ERROR] No child available to handle incoming request!
Feb 19 13:35:08 sogod [13164]: Sleeping 15 seconds while detecting changes in Ping...
I used gdb to get a trace of one of the hanging processes. The backtrace is:
#0 0x00007f176ddcc49d in nanosleep () from /lib64/libc.so.6
#1 0x00007f176ddcc334 in sleep () from /lib64/libc.so.6
#2 0x00007f17608e8a99 in -[SOGoActiveSyncDispatcher processPing:inResponse:] () from /usr/lib64/GNUstep/SOGo/ActiveSync.SOGo/./ActiveSync
#3 0x00007f17608eee4b in -[SOGoActiveSyncDispatcher dispatchRequest:inResponse:context:] () from /usr/lib64/GNUstep/SOGo/ActiveSync.SOGo/./ActiveSync
#4 0x00007f1760d50d84 in -[SOGoMicrosoftActiveSyncActions microsoftServerActiveSyncAction] () from /usr/lib64/GNUstep/SOGo/MainUI.SOGo/./MainUI
#5 0x00007f1773e61113 in -[WODirectAction performActionNamed:] () from /lib64/libNGObjWeb.so.4.9
#6 0x00007f1773ee3834 in -[SoActionInvocation callOnObject:withPositionalParametersWhenNotNil:inContext:] () from /lib64/libNGObjWeb.so.4.9
#7 0x00007f1773edee98 in -[SoObjectMethodDispatcher dispatchInContext:] () from /lib64/libNGObjWeb.so.4.9
#8 0x00007f1773ee0f09 in -[SoObjectRequestHandler handleRequest:inContext:session:application:] () from /lib64/libNGObjWeb.so.4.9
#9 0x00007f1773e72753 in -[WORequestHandler handleRequest:] () from /lib64/libNGObjWeb.so.4.9
#10 0x00007f1773e3433c in -[WOCoreApplication dispatchRequest:usingHandler:] () from /lib64/libNGObjWeb.so.4.9
#11 0x00007f1773e3463f in -[WOCoreApplication dispatchRequest:] () from /lib64/libNGObjWeb.so.4.9
#12 0x00007f17751fbb4d in -[SOGo dispatchRequest:] ()
#13 0x00007f1773ed1a85 in -[WOHttpTransaction _run] () from /lib64/libNGObjWeb.so.4.9
#14 0x00007f1773ed1de5 in -[WOHttpTransaction run] () from /lib64/libNGObjWeb.so.4.9
#15 0x00007f1773ecd9e4 in -[WOHttpAdaptor runConnection:] () from /lib64/libNGObjWeb.so.4.9
#16 0x00007f1773ecdc02 in -[WOHttpAdaptor _handleAcceptedConnection:] () from /lib64/libNGObjWeb.so.4.9
#17 0x00007f1773ecdff7 in -[WOHttpAdaptor _handleConnection:] () from /lib64/libNGObjWeb.so.4.9
#18 0x00007f1773ece2c3 in -[WOHttpAdaptor acceptControlMessage:] () from /lib64/libNGObjWeb.so.4.9
#19 0x00007f177261613f in -[NSNotificationCenter _postAndRelease:] () from /lib64/libgnustep-base.so.1.24
#20 0x00007f17732a0e3d in -[NSObject(FileObjectWatcher) receivedEvent:type:extra:forMode:] () from /lib64/libNGExtensions.so.4.9
#21 0x00007f177271ceea in -[GSRunLoopCtxt pollUntil:within:] () from /lib64/libgnustep-base.so.1.24
#22 0x00007f177265d870 in -[NSRunLoop acceptInputForMode:beforeDate:] () from /lib64/libgnustep-base.so.1.24
#23 0x00007f177265dd22 in -[NSRunLoop runMode:beforeDate:] () from /lib64/libgnustep-base.so.1.24
#24 0x00007f1773e33b94 in -[WOCoreApplication run] () from /lib64/libNGObjWeb.so.4.9
#25 0x00007f17751fb1fe in -[SOGo run] ()
#26 0x00007f1773e7bc5e in -[WOWatchDog _runChildWithControlSocket:] () from /lib64/libNGObjWeb.so.4.9
#27 0x00007f1773e7c0f1 in -[WOWatchDog _spawnChild:] () from /lib64/libNGObjWeb.so.4.9
#28 0x00007f1773e7c7d9 in -[WOWatchDog _ensureChildren] () from /lib64/libNGObjWeb.so.4.9
#29 0x00007f1773e7d7f6 in -[WOWatchDog run:argc:argv:] () from /lib64/libNGObjWeb.so.4.9
#30 0x00007f1773e7df21 in WOWatchDogApplicationMain () from /lib64/libNGObjWeb.so.4.9
#31 0x00007f17751fa491 in main ()
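A backtrace like the one above can be captured by attaching gdb to one of the children the watchdog flags; a minimal sketch (the PID is taken from the log above, substitute your own):

gdb -p 13168 -batch -ex bt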
Any help please?

For reference, PREFORK is set in both files:
/etc/sysconfig/sogo:
PREFORK=10
USER=sogo
/etc/rc.d/init.d/sogod:
PREFORK=10
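For what it's worth: each ActiveSync Ping appears to hold one worker for the whole "Sleeping 15 seconds" loop shown in the trace, up to SOGoMaximumPingInterval seconds, so a pool of 10 workers is quickly exhausted by a handful of Outlook devices. A sketch of the direction to tune, assuming the RAM allows it (the value 20 is an assumption, roughly one worker per EAS device plus headroom for web/DAV):

# /etc/sysconfig/sogo (assumed sizing, not a verified recommendation)
PREFORK=20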

Related

Dynamically receive json data in rust with reqwest

I've been trying to receive JSON data with reqwest and serde, but I keep getting the error:
Error: reqwest::Error { kind: Decode, source: Error("expected value", line: 1, column: 1) }
This is my code so far:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url: String = String::from("https://api.slothpixel.me/api/players/leastrio");
    let echo_json: serde_json::Value = reqwest::Client::new()
        .get(url)
        .send()
        .await?
        .json()
        .await?;
    println!("{:#?}", echo_json);
    Ok(())
}
My Cargo.toml dependencies:
reqwest = { version = "0.11", features = ["json"] }
tokio = { version = "1", features = ["full"] }
serde_json = "1"
So I've tried a few things, and it seems you need to add a User-Agent header for it to work. No idea why the documentation doesn't mention it. And I guess reqwest doesn't provide one by default.
reqwest::Client::new()
    .get(url)
    .header("User-Agent", "Reqwest Rust Test")
    .send()
    .await?
    .json()
    .await?;
I used this and it worked!
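For completeness, a minimal self-contained version of the fix; a sketch assuming the same crate versions listed above:

// main.rs -- same dependencies as above (reqwest 0.11 with "json", tokio 1 with "full", serde_json 1)
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://api.slothpixel.me/api/players/leastrio";
    let echo_json: serde_json::Value = reqwest::Client::new()
        .get(url)
        // Some endpoints reject requests that carry no User-Agent header.
        .header("User-Agent", "Reqwest Rust Test")
        .send()
        .await?
        .json()
        .await?;
    println!("{:#?}", echo_json);
    Ok(())
}

If you build one Client and reuse it, reqwest can also set a default header once via reqwest::Client::builder().user_agent("Reqwest Rust Test").build()?.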
It should work. A quick test with:
tracing = "0.1"
tracing-subscriber = "0.2"
Adding to main:
let subscriber = tracing_subscriber::FmtSubscriber::builder()
    .with_max_level(tracing::Level::TRACE)
    .finish();
tracing::subscriber::set_global_default(subscriber)
    .expect("setting default subscriber failed");
dbg!(reqwest::Client::new().get(&url).send().await?.text().await);
and running it with RUST_LOG=trace cargo run gives:
Jul 13 09:45:59.232 TRACE hyper::client::pool: checkout waiting for idle connection: ("https", api.slothpixel.me)
Jul 13 09:45:59.234 TRACE hyper::client::connect::http: Http::connect; scheme=Some("https"), host=Some("api.slothpixel.me"), port=None
Jul 13 09:45:59.234 DEBUG hyper::client::connect::dns: resolving host="api.slothpixel.me"
Jul 13 09:45:59.277 DEBUG hyper::client::connect::http: connecting to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.301 DEBUG hyper::client::connect::http: connected to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.352 TRACE hyper::client::conn: client handshake Http1
Jul 13 09:45:59.353 TRACE hyper::client::client: handshake complete, spawning background dispatcher task
Jul 13 09:45:59.353 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Busy }
Jul 13 09:45:59.353 TRACE hyper::client::pool: checkout dropped for ("https", api.slothpixel.me)
Jul 13 09:45:59.354 TRACE encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
Jul 13 09:45:59.355 DEBUG hyper::proto::h1::io: flushed 76 bytes
Jul 13 09:45:59.355 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
Jul 13 09:45:59.376 TRACE hyper::proto::h1::conn: Conn::read_head
Jul 13 09:45:59.377 TRACE parse_headers: hyper::proto::h1::role: Response.parse([Header; 100], [u8; 953])
Jul 13 09:45:59.377 TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(937)
Jul 13 09:45:59.378 DEBUG hyper::proto::h1::io: parsed 14 headers
Jul 13 09:45:59.378 DEBUG hyper::proto::h1::conn: incoming body is content-length (16 bytes)
Jul 13 09:45:59.378 TRACE hyper::proto::h1::decode: decode; state=Length(16)
Jul 13 09:45:59.379 DEBUG hyper::proto::h1::conn: incoming body completed
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.380 TRACE hyper::client::pool: put; add idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.380 DEBUG hyper::client::pool: pooling idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.380 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[src\main.rs:12] reqwest::Client::new().get(&url).send().await?.text().await = Ok(
    "error code: 1020",
)
Jul 13 09:45:59.381 TRACE hyper::proto::h1::dispatch: client tx closed
Jul 13 09:45:59.381 TRACE hyper::client::pool: pool closed, canceling idle interval
Jul 13 09:45:59.382 TRACE hyper::client::pool: checkout waiting for idle connection: ("https", api.slothpixel.me)
Jul 13 09:45:59.382 TRACE hyper::proto::h1::conn: State::close_read()
Jul 13 09:45:59.382 TRACE hyper::client::connect::http: Http::connect; scheme=Some("https"), host=Some("api.slothpixel.me"), port=None
Jul 13 09:45:59.382 TRACE hyper::proto::h1::conn: State::close_write()
Jul 13 09:45:59.382 DEBUG hyper::client::connect::dns: resolving host="api.slothpixel.me"
Jul 13 09:45:59.383 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
Jul 13 09:45:59.383 DEBUG hyper::client::connect::http: connecting to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.383 TRACE hyper::proto::h1::conn: shut down IO complete
Jul 13 09:45:59.396 DEBUG hyper::client::connect::http: connected to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.428 TRACE hyper::client::conn: client handshake Http1
Jul 13 09:45:59.428 TRACE hyper::client::client: handshake complete, spawning background dispatcher task
Jul 13 09:45:59.429 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Busy }
Jul 13 09:45:59.429 TRACE hyper::client::pool: checkout dropped for ("https", api.slothpixel.me)
Jul 13 09:45:59.430 TRACE encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
Jul 13 09:45:59.430 DEBUG hyper::proto::h1::io: flushed 76 bytes
Jul 13 09:45:59.430 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
Jul 13 09:45:59.451 TRACE hyper::proto::h1::conn: Conn::read_head
Jul 13 09:45:59.451 TRACE parse_headers: hyper::proto::h1::role: Response.parse([Header; 100], [u8; 953])
Jul 13 09:45:59.452 TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(937)
Jul 13 09:45:59.452 DEBUG hyper::proto::h1::io: parsed 14 headers
Jul 13 09:45:59.452 DEBUG hyper::proto::h1::conn: incoming body is content-length (16 bytes)
Jul 13 09:45:59.453 TRACE hyper::proto::h1::decode: decode; state=Length(16)
Jul 13 09:45:59.453 DEBUG hyper::proto::h1::conn: incoming body completed
Jul 13 09:45:59.453 TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
Jul 13 09:45:59.453 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::client::pool: put; add idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.454 DEBUG hyper::client::pool: pooling idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.454 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::client::pool: pool closed, canceling idle interval
Jul 13 09:45:59.454 TRACE hyper::proto::h1::dispatch: client tx closed
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: State::close_read()
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: State::close_write()
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
Jul 13 09:45:59.456 TRACE hyper::proto::h1::conn: shut down IO complete
Error: reqwest::Error { kind: Decode, source: Error("expected value", line: 1, column: 1) }
error code: 1020 is not even a JSON object, unlike https://api.slothpixel.me/api/players/ which returns a JSON object with an "error" field. (Error 1020 is Cloudflare's "Access Denied" response, which is consistent with the missing User-Agent being the trigger.) I suggest reporting this to https://github.com/slothpixel/core, or wherever is appropriate, because this error is odd.

Parse query stalls main thread for up to a minute

I'm using the Parse package from the Asset Store to save user data for my Unity3D mobile game (iOS/Android). Now I've got the problem that the Parse queries I make stall my Unity main thread for a long time. I was under the impression that the 'Task' concept of Parse was there to avoid exactly that, but somehow it doesn't seem to work. The stalling happens in the Editor as well, though only for a few seconds, compared to up to a minute on mobile. The query.FindAsync().ContinueWith(t => {myContinuationBlock}); call returns immediately and subsequent calls get executed. The Unity3D main thread stalls a few seconds after the query is executed, immediately BEFORE the code in {myContinuationBlock} is executed.
I made a video of the problem. In the video you can also see that only Unity is being stalled, the (native) iAds on the bottom are not affected:
https://www.youtube.com/watch?v=XWaCGk9Hbus (the stall is from 0:07 - 1:07)
Here is my query code (note: this code runs through without problems; the stall happens later, right before the ContinueWith code gets executed):
public static void RetrieveHighScores()
{
    DebugLog.Log("start of RetrieveHighScores() method");
    try
    {
        ParseQuery<ParseUser> query = ParseUser.Query
            .WhereGreaterThan(_highScoreKey, 1)
            .WhereGreaterThan("updatedAt", DateTime.Now - TimeSpan.FromDays(1))
            .OrderByDescending(_score24HoursKey)
            .Limit(1000);
        DebugLog.Log("query.FindAsync()");
        query.FindAsync().ContinueWith(t =>
        {
            try
            {
                if (t.IsFaulted || t.IsCanceled)
                {
                    DebugLog.Log("Retrieving 24 Hours High Scores Failed");
                }
                else
                {
                    DebugLog.Log("Retrieving 24 Hours High Scores successful! ");
                    IEnumerable<ParseUser> results = t.Result;
                    foreach (ParseUser user in results)
                    {
                        // doing something with user..
                    }
                    DebugLog.Log("24 Hours High Scores processed. " + _24HourHighScoreList.Count.ToString() + " entries.");
                }
            }
            catch (System.Exception e)
            {
                DebugLog.Log("Failed to retrieve 24 Hours High Scores. Reason: " + e.Message);
            }
        });
        DebugLog.Log("FindAsync() returned");
    }
    catch (System.Exception e)
    {
        DebugLog.Log("Failed to retrieve 24 Hours High Scores. Reason: " + e.Message);
    }
    try
    {
        ParseQuery<ParseUser> query = ParseUser.Query
            .WhereGreaterThan(_highScoreKey, 1)
            .OrderByDescending(_highScoreKey)
            .Limit(1000);
        DebugLog.Log("query.FindAsync()");
        query.FindAsync().ContinueWith(t =>
        {
            DebugLog.Log("Retrieving Alltime High Scores successful! ");
            IEnumerable<ParseUser> results = t.Result;
            foreach (ParseUser user in results)
            {
                // doing something with user...
            }
            DebugLog.Log("Alltime High Scores processed. " + _allTimeHighScoreList.Count.ToString() + " entries.");
        });
        DebugLog.Log("FindAsync() returned");
    }
    catch (System.Exception e)
    {
        DebugLog.Log("Failed to retrieve alltime Highscores. Reason: " + e.Message);
    }
    DebugLog.Log("end of RetrieveHighScores() method");
}
So, the very next thing I see in the console output after the stall is
"Retrieving Alltime High Scores successful! "
Now, I know that I'm querying 1000 objects here, and yes, there may be better ways to implement high scores, but I don't understand why this code is stalling my Unity3D main thread. Why does it stall sometimes for up to a minute, while at other times it's not noticeable at all?
This is a serious problem as my game is released already. It started surfacing only once the user database grew bigger and now I need a quick fix for it.
The stalling does not happen if I don't call the RetrieveHighScores() function, so it must be something that happens after the Parse code receives the data from the server and passes it to the ContinueWith code.
If I click pause in Xcode during the stall, I see the following:
Thread 1, Queue : com.apple.main-thread
#0 0x017eaeb0 in GC_mark_from ()
#1 0x017eb520 in GC_mark_some ()
#2 0x017e5d1c in GC_stopped_mark ()
#3 0x017e6228 in GC_try_to_collect_inner ()
#4 0x017e64f0 in GC_collect_or_expand ()
#5 0x017e6a38 in GC_allocobj ()
#6 0x017e957c in GC_generic_malloc_inner ()
#7 0x017e965c in GC_generic_malloc ()
#8 0x017e9920 in GC_malloc_atomic ()
#9 0x01783814 in mono_array_new_specific ()
#10 0x00c5886c in m_wrapper_managed_to_native_object___icall_wrapper_mono_array_new_specific_intptr_int at /Users/me/myproj/build/device/Libraries/mscorlib.dll.s:187982
#11 0x009ee54c in m_System_Text_RegularExpressions_Interpreter_ResetGroups at /Users/me/myproj/build/device/Libraries/System.dll.s:6462
#12 0x009eb864 in m_System_Text_RegularExpressions_Interpreter_Reset at /Users/me/myproj/build/device/Libraries/System.dll.s:5761
#13 0x009edb6c in m_157 at /Users/me/myproj/build/device/Libraries/System.dll.s:6244
#14 0x009ebc54 in m_155 at /Users/me/myproj/build/device/Libraries/System.dll.s:5813
#15 0x009eb804 in m_System_Text_RegularExpressions_Interpreter_Scan_System_Text_RegularExpressions_Regex_string_int_int at /Users/me/myproj/build/device/Libraries/System.dll.s:5745
#16 0x009f3ce8 in m_System_Text_RegularExpressions_Regex_Match_string_int at /Users/me/myproj/build/device/Libraries/System.dll.s:9106
#17 0x00370204 in m_Parse_Internal_Json_Accept_string_int_System_Text_RegularExpressions_Regex_int__System_Text_RegularExpressions_Match_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3770
#18 0x0036fa80 in m_Parse_Internal_Json_ParseString_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3577
#19 0x0036f56c in m_Parse_Internal_Json_ParseMember_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3447
#20 0x0036f3b8 in m_Parse_Internal_Json_ParseObject_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3413
#21 0x0036f99c in m_Parse_Internal_Json_ParseValue_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3553
#22 0x0036f780 in m_Parse_Internal_Json_ParseArray_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3500
#23 0x0036f9b8 in m_Parse_Internal_Json_ParseValue_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3556
#24 0x0036f5ec in m_Parse_Internal_Json_ParseMember_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3457
#25 0x0036f3b8 in m_Parse_Internal_Json_ParseObject_string_int_int__object_ at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3413
#26 0x0036e920 in m_Parse_Internal_Json_Parse_string at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:3143
#27 0x00383168 in m_Parse_ParseClient_DeserializeJsonString_string at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:13121
#28 0x0038381c in m_Parse_ParseClient__c__DisplayClass8__RequestAsyncb__7_System_Threading_Tasks_Task_1_System_Tuple_2_System_Net_HttpStatusCode_string at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:13363
#29 0x0036de50 in m_Parse_Internal_InternalExtensions__c__DisplayClass1_2__OnSuccessb__0_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:2697
#30 0x0036e04c in m_Parse_Internal_InternalExtensions__c__DisplayClass7_1__OnSuccessb__6_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:2788
#31 0x003a3a4c in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5__ContinueWithb__2 ()
#32 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#33 0x003a39bc in m_System_Threading_Tasks_Task__c__DisplayClass3_1__ContinueWithb__1_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31474
#34 0x003a4140 in m_System_Threading_Tasks_Task_1_RunContinuations at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31766
#35 0x003a4278 in m_System_Threading_Tasks_Task_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31801
#36 0x003a4654 in m_System_Threading_Tasks_TaskCompletionSource_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31950
#37 0x003a4fa0 in m_3d9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:32334
#38 0x003a44b0 in m_System_Threading_Tasks_Task_1__c__DisplayClass1__ContinueWithb__0_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31872
#39 0x003a3b3c in m_System_Threading_Tasks_Task__c__DisplayClass8__ContinueWithb__7_System_Threading_Tasks_Task ()
#40 0x00519a84 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5_int__ContinueWithb__2 ()
#41 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#42 0x004a9548 in m_1ce9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:165091
#43 0x003a8dcc in m_System_Threading_Tasks_Task_ContinueWith_int_System_Func_2_System_Threading_Tasks_Task_int_System_Threading_CancellationToken at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:34526
#44 0x003a321c in m_System_Threading_Tasks_Task_ContinueWith_System_Action_1_System_Threading_Tasks_Task_System_Threading_CancellationToken at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31155
#45 0x003a3168 in m_System_Threading_Tasks_Task_ContinueWith_System_Action_1_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31129
#46 0x003a3fdc in m_System_Threading_Tasks_Task_1_ContinueWith_System_Action_1_System_Threading_Tasks_Task_1_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31721
#47 0x003a4ed4 in m_3d8 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:32304
#48 0x003a44b0 in m_System_Threading_Tasks_Task_1__c__DisplayClass1__ContinueWithb__0_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31872
#49 0x003a3b3c in m_System_Threading_Tasks_Task__c__DisplayClass8__ContinueWithb__7_System_Threading_Tasks_Task ()
#50 0x00519a84 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5_int__ContinueWithb__2 ()
#51 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#52 0x004a9548 in m_1ce9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:165091
#53 0x003a4140 in m_System_Threading_Tasks_Task_1_RunContinuations at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31766
#54 0x003a4278 in m_System_Threading_Tasks_Task_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31801
#55 0x003a4654 in m_System_Threading_Tasks_TaskCompletionSource_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31950
#56 0x003a3a60 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5__ContinueWithb__2 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31498
#57 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#58 0x003a39bc in m_System_Threading_Tasks_Task__c__DisplayClass3_1__ContinueWithb__1_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31474
#59 0x003a4140 in m_System_Threading_Tasks_Task_1_RunContinuations at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31766
#60 0x003a4278 in m_System_Threading_Tasks_Task_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31801
#61 0x003a4654 in m_System_Threading_Tasks_TaskCompletionSource_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31950
#62 0x003a4fa0 in m_3d9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:32334
#63 0x003a44b0 in m_System_Threading_Tasks_Task_1__c__DisplayClass1__ContinueWithb__0_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31872
#64 0x003a3b3c in m_System_Threading_Tasks_Task__c__DisplayClass8__ContinueWithb__7_System_Threading_Tasks_Task ()
#65 0x00519a84 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5_int__ContinueWithb__2 ()
#66 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#67 0x004a9548 in m_1ce9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:165091
#68 0x003a8dcc in m_System_Threading_Tasks_Task_ContinueWith_int_System_Func_2_System_Threading_Tasks_Task_int_System_Threading_CancellationToken at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:34526
#69 0x003a321c in m_System_Threading_Tasks_Task_ContinueWith_System_Action_1_System_Threading_Tasks_Task_System_Threading_CancellationToken at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31155
#70 0x003a3168 in m_System_Threading_Tasks_Task_ContinueWith_System_Action_1_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31129
#71 0x003a3fdc in m_System_Threading_Tasks_Task_1_ContinueWith_System_Action_1_System_Threading_Tasks_Task_1_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31721
#72 0x003a4ed4 in m_3d8 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:32304
#73 0x003a44b0 in m_System_Threading_Tasks_Task_1__c__DisplayClass1__ContinueWithb__0_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31872
#74 0x003a3b3c in m_System_Threading_Tasks_Task__c__DisplayClass8__ContinueWithb__7_System_Threading_Tasks_Task ()
#75 0x00519a84 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5_int__ContinueWithb__2 ()
#76 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#77 0x004a9548 in m_1ce9 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:165091
#78 0x003a4140 in m_System_Threading_Tasks_Task_1_RunContinuations at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31766
#79 0x003a4278 in m_System_Threading_Tasks_Task_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31801
#80 0x003a4654 in m_System_Threading_Tasks_TaskCompletionSource_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31950
#81 0x003a3a60 in m_System_Threading_Tasks_Task__c__DisplayClass3_1__c__DisplayClass5__ContinueWithb__2 at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31498
#82 0x003a36b8 in m_System_Threading_Tasks_Task___cctorb__23_System_Action at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31346
#83 0x003a39bc in m_System_Threading_Tasks_Task__c__DisplayClass3_1__ContinueWithb__1_System_Threading_Tasks_Task at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31474
#84 0x003a4140 in m_System_Threading_Tasks_Task_1_RunContinuations at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31766
#85 0x003a4278 in m_System_Threading_Tasks_Task_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31801
#86 0x003a4654 in m_System_Threading_Tasks_TaskCompletionSource_1_TrySetResult_T at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:31950
#87 0x003a0848 in m_Parse_PlatformHooks__c__DisplayClass2f__c__DisplayClass35__RequestAsyncb__2a_UnityEngine_WWW at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:29764
#88 0x003a0334 in m_Parse_PlatformHooks__c__DisplayClass20__RegisterNetworkRequestb__1f at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:29572
#89 0x003a0dbc in m_Parse_PlatformHooks__RunDispatcherd__39_MoveNext at /Users/me/myproj/build/device/Libraries/Parse.Unity.dll.s:29915
#90 0x0126ecec in scripting_method_invoke(ScriptingMethod*, MonoObject*, ScriptingArguments&, MonoException**) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Scripting/Backend/Mono/ScriptingBackendApi_Mono.cpp:196
#91 0x01309390 in ScriptingInvocation::Invoke(MonoException**, bool) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Scripting/Backend/ScriptingInvocation.cpp:128
#92 0x0130935c in ScriptingInvocation::Invoke(MonoException**) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Scripting/Backend/ScriptingInvocation.cpp:113
#93 0x01309308 in bool ScriptingInvocation::Invoke<bool>(MonoException**) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Scripting/Backend/ScriptingInvocation.cpp:80
#94 0x012df7e4 in Coroutine::InvokeMoveNext(MonoException**) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Mono/Coroutine.cpp:196
#95 0x012df57c in Coroutine::Run() at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Mono/Coroutine.cpp:221
#96 0x012df544 in Coroutine::ContinueCoroutine(Object*, void*) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Mono/Coroutine.cpp:78
#97 0x012593c4 in DelayedCallManager::Update(int) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/GameCode/CallDelayed.cpp:164
#98 0x012d0630 in PlayerLoop(bool, bool, IHookEvent*) at /Applications/buildAgent/work/d63dfc6385190b60/Runtime/Misc/Player.cpp:1880
#99 0x01117878 in UnityPlayerLoop at /Applications/buildAgent/work/d63dfc6385190b60/PlatformDependent/iPhonePlayer/LibEntryPoint.mm:241
#100 0x00d2ff34 in -[UnityAppController(Rendering) repaint] at /Users/me/myproj/build/device/Classes/UnityAppController+Rendering.mm:55
It seems Parse's JSON parsing runs on the main thread, but why?
What's going on here, and how can I avoid having the main thread stalled?
Note: occasionally the stalling does not happen (the query succeeds nevertheless), as if Parse sometimes manages to do the parsing on another thread but most of the time does it on the main thread.
I'd suggest that if you hook up the Unity profiler to your device, you're going to find that your LINQ query is creating immense amounts of garbage, and it's the garbage collection on your device that is stalling your main thread.
Reimplement your leaderboard functionality the hard way, without LINQ, as sketched below. LINQ saves programmer time, not machine time; don't use it in your games.
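As an illustration of "the hard way", a sketch of an allocation-conscious continuation body (HighScoreEntry and the reuse of _24HourHighScoreList are hypothetical placeholders; adapt to your own types):

// Inside ContinueWith(t => { ... }), once t.Result is available:
IEnumerable<ParseUser> results = t.Result;
_24HourHighScoreList.Clear();           // reuse the existing list instead of allocating a new one
foreach (ParseUser user in results)     // plain foreach: no LINQ iterators or closure allocations
{
    // ParseObject.Get<T>(key) reads a typed field from the fetched user
    _24HourHighScoreList.Add(new HighScoreEntry(user.Username, user.Get<int>(_score24HoursKey)));
}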

Node.js cluster get master PID

I used the following cluster code to fork multiple processes for my node app.
if (cluster.isMaster) {
    require('os').cpus().forEach(function () {
        cluster.fork();
    });
    cluster.on('exit', function (worker, code, signal) {
        cluster.fork();
    });
} else if (cluster.isWorker) {
    logger.log.info('Worker server started on port %d (ID: %d, PID: %d)', app.get('port'), cluster.worker.id, cluster.worker.process.pid);
}
The output is:
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 1, PID: 606)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 2, PID: 607)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 5, PID: 610)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 3, PID: 608)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 4, PID: 609)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 6, PID: 611)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 8, PID: 613)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 7, PID: 612)
There are 8 worker processes, but when I checked the processes using pgrep, I saw 9:
$ pgrep -l node
613 node
612 node
611 node
610 node
609 node
608 node
607 node
606 node
605 node
So the one extra process must be the master process. How do I print out the master process PID?
Thanks
I posted another question related to this one; I think it might be useful for everyone to look at as well:
Node.js cluster master process reboot after got kill & pgrep?
You can get the master process PID with process.pid inside if (cluster.isMaster). IP and port are properties of your app, so those would be the same.
You can get the master (parent) pid with process.ppid.
This will let you send a signal which is useful for reloads without downtime.
For instance process.kill(process.ppid, 'SIGHUP');
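A minimal sketch combining both answers (standard Node cluster API; note that process.ppid needs a reasonably recent Node version):

const cluster = require('cluster');

if (cluster.isMaster) {
    // In the master, process.pid is the master's own PID.
    console.log('Master started (PID: %d)', process.pid);
    require('os').cpus().forEach(function () {
        cluster.fork();
    });
} else {
    // In a worker, process.ppid is the parent (master) PID.
    console.log('Worker %d (PID: %d, master PID: %d)',
        cluster.worker.id, process.pid, process.ppid);
}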

The MongoDB process is shutting down each day. How do I run mongod forever on the server?

I am a beginner with MongoDB and I have a problem running it on the server.
My project is hosted on hostmonster.com servers, but they don't give me support for MongoDB databases, although they say that I can install it under my own responsibility.
So I installed MongoDB 2.4.1 for Linux 64-bit without problems, and then, in the MongoDB bin folder (with: mongo, mongod, mongodump ...), I created folders 'data' and 'data/db' for doing some tests.
From a console, I connect to the server over SSH and run
./mongod --dbpath 'data/db'
and it works.
But I need it to run automatically, forever.
I followed the steps in "Mongodb can't start" and ran the following line:
./mongod --fork --dbpath 'data/db' --smallfiles --logpath 'data/mongodb.log' --logappend
It also worked: it started the process, I closed the console, the process kept running, and I could view my data through my domain.
The problem is that the process dies within about a day, i.e., I can no longer see my data through the domain, and then I need to run mongod again with:
./mongod --fork --dbpath 'data/db' --smallfiles --logpath 'data/mongodb.log' --logappend
I don't want to do this every day. My questions are:
What may be the problem? Why does the mongod process die each day?
How can I run the process forever?
Sorry for my English.
Edit: adding the last error log. I don't understand it.
Fri Apr 12 03:19:34.577 [TTLMonitor] query local.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 locks(micros) r:141663 nreturned:0 reslen:20 141ms
Fri Apr 12 03:19:34.789 [TTLMonitor] query users.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:3 keyUpdates:0 locks(micros) r:211595 nreturned:0 reslen:20 211ms
Fri Apr 12 03:20:57.869 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 18215ms
Fri Apr 12 03:20:57.931 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 8ms
Fri Apr 12 03:22:14.155 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 32ms
Fri Apr 12 03:22:14.215 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 14ms
Fri Apr 12 03:22:30.670 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:430204 nreturned:0 reslen:20 430ms
Fri Apr 12 03:23:14.825 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 7ms
Fri Apr 12 03:23:31.133 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:179175 nreturned:0 reslen:20 168ms
Fri Apr 12 03:25:19.201 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 505ms
Fri Apr 12 03:25:23.370 [TTLMonitor] query local.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 locks(micros) r:3604735 nreturned:0 reslen:20 3604ms
Fri Apr 12 03:25:25.294 [TTLMonitor] query users.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:3 keyUpdates:0 numYields: 1 locks(micros) r:3479328 nreturned:0 reslen:20 1882ms
Fri Apr 12 03:26:26.647 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 numYields: 1 locks(micros) r:1764712 nreturned:0 reslen:20 1044ms
Fri Apr 12 04:09:27.804 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:200919 nreturned:0 reslen:20 200ms
Fri Apr 12 04:43:54.002 got signal 15 (Terminated), will terminate after current cmd ends
Fri Apr 12 04:43:54.151 [interruptThread] now exiting
Fri Apr 12 04:43:54.151 dbexit:
Fri Apr 12 04:43:54.157 [interruptThread] shutdown: going to close listening sockets...
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 9
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 10
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 11
Fri Apr 12 04:43:54.160 [interruptThread] removing socket file: /tmp/mongodb-27017.sock
Fri Apr 12 04:43:54.160 [interruptThread] shutdown: going to flush diaglog...
Fri Apr 12 04:43:54.160 [interruptThread] shutdown: going to close sockets...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: waiting for fs preallocator...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: lock for final commit...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: final commit...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: closing all files...
Fri Apr 12 04:43:54.212 [interruptThread] closeAllFiles() finished
Fri Apr 12 04:43:54.220 [interruptThread] journalCleanup...
Fri Apr 12 04:43:54.246 [interruptThread] removeJournalFiles
Fri Apr 12 04:43:54.280 [interruptThread] error removing journal files
boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.280 [interruptThread] error couldn't remove journal file during shutdown boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.285 shutdown failed with exception
Fri Apr 12 04:43:54.285 dbexit: really exiting now
Your answer is here:
Fri Apr 12 04:43:54.002 got signal 15 (Terminated), will terminate after current cmd ends
Fri Apr 12 04:43:54.151 [interruptThread] now exiting
Your process is receiving signal 15, which is the default kill signal. It's possible that their systems are automatically killing long-running processes or something similar. If that is indeed what's happening, then your host would have to resolve that.
Additionally, these errors:
Fri Apr 12 04:43:54.280 [interruptThread] error removing journal files
boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.280 [interruptThread] error couldn't remove journal file during shutdown boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
indicate that something is wrong with your install's data directory. The journal files either don't exist, or are going missing; if some process on the system is trying to clean things up, then it wouldn't surprise me if something is nuking your journal files.
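If the host cannot be convinced to stop killing the process, a common workaround is a watchdog cron job that restarts mongod whenever it is not running; a sketch reusing the exact command from the question and the install path visible in the log above:

# crontab entry: every 5 minutes, restart mongod if no such process exists
*/5 * * * * pgrep -x mongod > /dev/null || (cd /home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin && ./mongod --fork --dbpath 'data/db' --smallfiles --logpath 'data/mongodb.log' --logappend)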
I know this is an old question, but my experience might be helpful to other readers.
Based on my tests, they only let you run a program for 5 minutes (sometimes a bit more) before killing it, so it's fairly useless to install MongoDB there unless you have a dedicated IP.

node.js express cluster and high CPU usage

My node.js app uses express, socket.io and talks to mongodb through mongoose. All of these work fine with low CPU usage.
When I made the app run with cluster, it works, but the CPU usage goes very high. Here is what I am doing:
var settings = require("./settings"),
    cluster = require('cluster');

cluster('./server')
    .use(cluster.logger('logs'))
    .use(cluster.stats())
    .use(cluster.pidfiles('pids'))
    .use(cluster.cli())
    .use(cluster.repl(8888))
    .listen(7777);
When I check the master.log, I see
[Fri, 21 Oct 2011 02:59:51 GMT] INFO master started
[Fri, 21 Oct 2011 02:59:53 GMT] ERROR worker 0 died
[Fri, 21 Oct 2011 02:59:53 GMT] INFO spawned worker 0
[Fri, 21 Oct 2011 02:59:54 GMT] ERROR worker 0 died
[Fri, 21 Oct 2011 02:59:54 GMT] INFO spawned worker 0
[Fri, 21 Oct 2011 02:59:56 GMT] ERROR worker 0 died
[Fri, 21 Oct 2011 02:59:56 GMT] INFO spawned worker 0
.....
[Fri, 21 Oct 2011 03:11:08 GMT] INFO spawned worker 0
[Fri, 21 Oct 2011 03:11:10 GMT] WARNING shutting down master
[Fri, 21 Oct 2011 03:12:07 GMT] INFO spawned worker 0
[Fri, 21 Oct 2011 03:12:07 GMT] INFO spawned worker 1
[Fri, 21 Oct 2011 03:12:07 GMT] INFO master started
[Fri, 21 Oct 2011 03:12:09 GMT] ERROR worker 1 died
[Fri, 21 Oct 2011 03:12:09 GMT] INFO spawned worker 1
[Fri, 21 Oct 2011 03:12:10 GMT] ERROR worker 1 died
[Fri, 21 Oct 2011 03:12:10 GMT] INFO spawned worker 1
In workers.access.log, I see all console messages, socket.io logs etc...
In workers.error.log, I see the following error messages; it looks like something is wrong...
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: EADDRINUSE, Address already in use
at HTTPServer._doListen (net.js:1106:5)
at net.js:1077:14
at Object.lookup (dns.js:153:45)
at HTTPServer.listen (net.js:1071:20)
at Object.<anonymous> (/cygdrive/c/HTML5/RENT/test/server/server.js:703:5)
at Module._compile (module.js:402:26)
at Object..js (module.js:408:10)
at Module.load (module.js:334:31)
at Function._load (module.js:293:12)
at require (module.js:346:19)
server.js:703 - points to app.listen(9999);
EDIT: server.js code
var express = require("express"),
    fs = require("fs"),
    form = require('connect-form'),
    app = module.exports = express.createServer(
        form({ keepExtensions: true })
    ),
    sys = require("sys"),
    RentModel = require("./rent_schema"),
    UserModel = require("./track_schema"),
    email = require("./email_connect"),
    SubscriptionModel = require("./subscription_schema"),
    io = require("socket.io"),
    fb = require('facebook-js'),
    Twitter = require('./Twitter_Analysis'),
    Foursquare = require('./Foursquare_Analysis'),
    YQL = require("yql"),
    settings = require("./settings");
//
var cluster = require('cluster');
cluster(app)
    .use(cluster.logger('logs'))
    .use(cluster.stats())
    .use(cluster.pidfiles('pids'))
    .use(cluster.cli())
    .use(cluster.debug())
    .use(cluster.repl(settings.ADMIN_PORT))
    .listen(settings.PORT);

socket = io.listen(app);
.....
.....
//app.listen(settings.PORT);
It looks like you're trying to bind your workers to the same port; that crashes the workers, and cluster keeps restarting them, so you're in an infinite death cycle.
I'm not sure you need the app.listen(9999) in your server.js file; that call is probably trying to bind port 9999 in every worker. See the examples in the cluster package for a good example: https://github.com/LearnBoost/cluster/blob/master/examples/express.js
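For reference, a minimal sketch of the pattern that example uses (the LearnBoost cluster package binds the port once via .listen(); the exported app must not call app.listen() itself):

// server.js -- export the app; do NOT call app.listen() here
var express = require("express");
var app = module.exports = express.createServer();

app.get("/", function (req, res) {
    res.send("hello from worker " + process.pid);
});

// start.js -- cluster binds the port and shares it across the workers
var cluster = require("cluster"); // the LearnBoost 'cluster' npm package, not the core module
cluster("./server")
    .use(cluster.logger("logs"))
    .listen(7777);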
