Nodejs delay/interrupt in for loop - node.js

I want to write a logger (please, no comments about why, or "use ...").
But I am confused by the Node.js (event?) loop / forEach behaviour.
As example:
for (var i = 0; i < 100; i++) {
  process.stdout.write(Date.now().toString() + "\n", "utf8");
}
The output looks like: 1466021578453, 1466021578453, 1466021578469, 1466021578469
Questions: Where does the 16 ms delay come from, and how can I prevent it?
EDIT: Windows 7, x64 (on Ubuntu 15 the delay is at most 2 ms)

sudo ltrace -o outlog node myTest.js
This is likely more than you want. On my machine, the call that Date.now() translates into is clock_gettime. You want to look at what happens between subsequent calls to clock_gettime. You're also writing out to STDOUT, and each time you do that there is overhead. You can run the whole process under ltrace to see what's happening, and get a summary with -c (e.g. ltrace -c node myTest.js).
For me, it runs in 3 ms when not running it under ltrace.
% time seconds usecs/call calls function
------ ----------- ----------- --------- --------------------
28.45 6.629315 209 31690 memcpy
26.69 6.219529 217 28544 memcmp
16.78 3.910686 217 17990 free
9.73 2.266705 214 10590 malloc
2.92 0.679971 220 3083 _Znam
2.86 0.666421 216 3082 _ZdaPv
2.55 0.593798 206 2880 _ZdlPv
2.16 0.502644 211 2378 _Znwm
1.09 0.255114 213 1196 strlen
0.69 0.161741 215 750 pthread_getspecific
0.67 0.155609 209 744 memmove
0.57 0.133857 212 631 _ZNSo6sentryC1ERSo
0.57 0.133344 226 589 pthread_mutex_lock
0.52 0.121342 206 589 pthread_mutex_unlock
0.46 0.106343 207 512 clock_gettime
0.40 0.093022 204 454 memset
0.39 0.089857 216 416 _ZNSt9basic_iosIcSt11char_traitsIcEE4initEPSt15basic_streambufIcS1_E
0.22 0.050741 195 259 strcmp
0.20 0.047454 228 208 _ZNSt8ios_baseC2Ev
0.20 0.047236 227 208 floor
0.19 0.044603 214 208 _ZNSt6localeC1Ev
0.19 0.044536 212 210 _ZNSs4_Rep10_M_destroyERKSaIcE
0.19 0.044200 212 208 _ZNSt8ios_baseD2Ev
I'm not sure why there are 31,690 memcpy's in there and 28,544 memcmp's. That seems a bit excessive, but perhaps that's just the JIT start-up cost. As for the runtime cost, you can see there are 512 calls to clock_gettime. No idea why there are that many calls either, but you can see 106 ms lost in clock_gettime. Good luck with it.

Related

Can't explain this Node clustering behavior

I'm learning about threads and how they interact with Node's native cluster module. I saw some behavior I can't explain that I'd like some help understanding.
My code:
process.env.UV_THREADPOOL_SIZE = 1;
const cluster = require('cluster');
if (cluster.isMaster) {
  cluster.fork();
} else {
  const crypto = require('crypto');
  const express = require('express');
  const app = express();
  app.get('/', (req, res) => {
    crypto.pbkdf2('a', 'b', 100000, 512, 'sha512', () => {
      res.send('Hi there');
    });
  });
  app.listen(3000);
}
I benchmarked this code with one request using ApacheBench.
ab -c 1 -n 1 localhost:3000/ yielded these connection times:
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 605 605 0.0 605 605
Waiting: 605 605 0.0 605 605
Total: 605 605 0.0 605 605
So far so good. I then ran ab -c 2 -n 2 localhost:3000/ (doubling the number of calls from the benchmark). I expected the total time to double, since I limited the libuv thread pool to one thread per child process and I only started one child process. But nothing really changed. Here are those results.
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 608 610 3.2 612 612
Waiting: 607 610 3.2 612 612
Total: 608 610 3.3 612 612
For extra info, when I further increase the number of calls with ab -c 3 -n 3 localhost:3000/, I start to see a slowdown.
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 599 814 352.5 922 1221
Waiting: 599 814 352.5 922 1221
Total: 599 815 352.5 922 1221
I'm running all this on a quad-core Mac using Node v14.13.1.
tl;dr: how did my benchmark not use up all my threads? I forked one child process with one thread in its libuv pool, so the one call in my benchmark should have been all it could handle without taking longer. And yet the second test (the one that doubled the number of calls) took the same amount of time as the benchmark.

Python : Memory consumption accumulating in a while loop

A confession first: I'm a noob programmer here doing occasional scripting. I've been trying to understand the memory consumption of this simple piece of code, but I haven't been able to figure it out. I have searched the answered questions, but couldn't find an explanation. I'm fetching some JSON data using a REST API, and the piece of code below ends up consuming a lot of RAM. I checked the Windows task manager, and the memory consumption increases incrementally with each iteration of the loop. I'm overwriting the same variable on each API call, so I'd expect the previous response to be released.
while Flag == True:
    urlpart = 'data/device/statistics/approutestatsstatistics?scrollId=' + varScrollId
    response = json.loads(obj1.get_request(urlpart))
    lstDataList = lstDataList + response['data']
    Flag = response['pageInfo']['hasMoreData']
    varScrollId = response['pageInfo']['scrollId']
    count += 1
    print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
    print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
return lstDataList
I tried to profile memory usage using memory_profiler...here's what it shows
92 119.348 MiB 0.000 MiB count = 0
93 806.938 MiB 0.000 MiB while Flag == True:
94 806.938 MiB 0.000 MiB urlpart= 'data/device/statistics/approutestatsstatistics?scrollId='+varScrollId
95 807.559 MiB 30.293 MiB response = json.loads(obj1.get_request(urlpart))
96 806.859 MiB 0.000 MiB print('Size of response within the loop is {}'.format(sys.getsizeof(response)))
97 806.938 MiB 1.070 MiB lstDataList = lstDataList + response['data']
98 806.938 MiB 0.000 MiB Flag = response['pageInfo']['hasMoreData']
99 806.938 MiB 0.000 MiB varScrollId = response['pageInfo']['scrollId']
100 806.938 MiB 0.000 MiB count += 1
101 806.938 MiB 0.000 MiB print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
102 806.938 MiB 0.000 MiB print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
103 return lstDataList
obj1 is an object of Cisco's rest_api_lib class. Link to code here
In fact the program ends up consuming ~1.6 GB of RAM. The data I'm fetching has roughly 570K records. The API limits the records to 10K at a time, so the loop runs ~56 times. Line 95 of the code consumes ~30 MB of RAM per iteration according to the memory_profiler output. It's as if each iteration consumes 30 MB, ending up with ~1.6 GB, which is in the same ballpark. I can't figure out why the memory consumption keeps accumulating across the loop.
Thanks.
I would suspect it is the line lstDataList = lstDataList + response['data']
This is accumulating response['data'] over time. Also, your indentation seems off, should it be:
while Flag == True:
    urlpart = 'data/device/statistics/approutestatsstatistics?scrollId=' + varScrollId
    response = json.loads(obj1.get_request(urlpart))
    lstDataList = lstDataList + response['data']
    Flag = response['pageInfo']['hasMoreData']
    varScrollId = response['pageInfo']['scrollId']
    count += 1
    print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
    print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
return lstDataList
As far as I can tell, lstDataList will keep growing with each request, leading to the memory increase. Hope that helps, Happy Friday!
it's as if each iteration consumes 30M
That is exactly what is happening. You need to free memory that you don't need, for example once you have extracted the data from response. You can delete it like so:
del response
more on del
more on garbage collection
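For illustration, here is a self-contained sketch of that pattern; fetch_page is only a stand-in for json.loads(obj1.get_request(...)) from the question, and the page contents are fake:

import gc
import sys

def fetch_page(page_no):
    # Stand-in for json.loads(obj1.get_request(urlpart)) from the question.
    return {'data': list(range(10000)),
            'pageInfo': {'hasMoreData': page_no < 5, 'scrollId': str(page_no)}}

records = []
page_no = 0
more = True
while more:
    response = fetch_page(page_no)
    records.extend(response['data'])   # keep only the records needed long-term
    more = response['pageInfo']['hasMoreData']
    page_no += 1
    del response                       # drop the parsed page once its data is copied out
    gc.collect()                       # optional: force a collection between requests
    print('pages:', page_no, 'records:', len(records),
          'list size:', sys.getsizeof(records))

Note that records itself still grows, which is expected; del only makes sure each parsed page can be collected as soon as its data has been copied out.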

Permission denied while accessing /proc/<pid>/exe

I am having trouble accessing a file in the /proc filesystem.
Once started, my process writes to a log file. The process stopped, and when I checked the log file to see where it ran into a problem, I found "permission denied".
It goes to the /proc directory, fetches the PID via getPID(), and calls open() with O_RDONLY to read /proc/<pid>/exe,
but the open() fails with the error "Permission denied".
I did some research and found that the kernel enforces some restrictions when accessing certain files in /proc, but I have 20 processes all accessing the same /proc/<pid>/exe, and only one of them hits this problem.
CHAR fn[100];
CHAR args[500];
CHAR ProgName[50];
CHAR *arr[6];
CHAR *buf;
CHAR ProcessId[10];
static int count_try = 0;

memset(fn, 0, 100);
memset(ProcessId, 0, 10);
sprintf(ProcessId, "%d", Pid);
strcpy(fn, "/proc/");
strcat(fn, ProcessId);
//strcat(fn, "/elf_prpsinfo");
strcat(fn, "/exe");

if ((psp = open(fn, O_RDONLY)) == -1)
{
    perror("GetProgName:ps open::");
    exit(ERROR);
}

How can I give this time-critical procedure priority over other threads?

In order to write a MIDI sequencer I need a steady pulse that calls a timing routine that has absolute priority over anything else in the program, and preferably over anything else on the computer. I do this by using TimeSetEvent like this:
TimeSetEvent (FInterval, 0, TimerUpdate, uInt32 (Self), TIME_PERIODIC);
where TimerUpdate is a callback that resumes a separate thread with priority tpTimeCritical, which in turn calls a routine (FOnTimer) that handles all MIDI events.
procedure TThreaded_Timer.Execute;
begin
  if Assigned (FOnTimer) then
  begin
    while not Terminated do
    begin
      FOnTimer (Self);
      if not Terminated then Suspend;
    end; // while
  end; // if
  Terminate;
end; // Execute //
Although this construction is much better than some things I tried before, it is still very sensitive to some events. To my surprise it stutters at each display of a hint. Why can a simple hint cause such an interruption of a time-critical thread? Of course I can switch hints off, but which nasty surprises are still waiting for me?
Use the multimedia timer, which is designed for this purpose. Delphi timers are awful and really only get attention during idle time. A thread-based timer is only useful when that thread gets attention. The MMTimer operates at kernel level and provides a callback that is really quite accurate. We use it for hardware sequencing automation control, it is that good.
Here is my unit that implements an MMTimer as an easier-to-use TTimer. Use 'Repeat' to make it single-shot or repetitive.
unit UArtMMTimer;

interface

uses
  Classes,
  SysUtils,
  ExtCtrls,
  MMSystem;

type
  TArtMMTimer = class( TObject )
    constructor Create;
    destructor Destroy; override;
  PRIVATE
    FHandle : MMResult;
    FRepeat : boolean;
    FIntervalMS : integer;
    FOnTimer : TNotifyEvent;
    FEnabled : boolean;
    procedure RemoveEvent;
    procedure InstallEvent;
    procedure DoOnCallback;
    procedure SetEnabled( AState : boolean );
    procedure SetIntervalMS( AValue : integer );
  PUBLIC
    property Enabled : boolean
      read FEnabled
      write SetEnabled;
    property OnTimer : TNotifyEvent
      read FOnTimer
      write FOnTimer;
    property IntervalMS : integer
      read FIntervalMS
      write SetIntervalMS;
  end;

implementation

uses
  Windows;

// TArtMMTimer
// --------------------------------------------------------------------

procedure MMTCallBack(uTimerID, uMessage: UINT;
  dwUser, dw1, dw2: DWORD); stdcall;
var
  Timer : TArtMMTimer;
begin
  Timer := TArtMMTimer( dwUser );
  Timer.DoOnCallback;
end;

constructor TArtMMTimer.Create;
begin
  Inherited Create;
  FIntervalMS := 100;
  FRepeat := True;
end;

destructor TArtMMTimer.Destroy;
begin
  FOnTimer := nil;
  RemoveEvent;
  Inherited Destroy;
end;

procedure TArtMMTimer.RemoveEvent;
begin
  If FHandle <> 0 then
  begin
    timeKillEvent( FHandle );
    FHandle := 0;
  end;
end;

procedure TArtMMTimer.InstallEvent;
var
  iFlags : integer;
begin
  RemoveEvent;
  If FRepeat then
    iFlags := TIME_PERIODIC Or TIME_CALLBACK_FUNCTION
  else
    iFlags := TIME_CALLBACK_FUNCTION;
  FHandle := timeSetEvent(
    FIntervalMS,
    0,
    @MMTCallBack,
    DWord(Self),
    iFlags );
end;

procedure TArtMMTimer.SetEnabled( AState : boolean );
begin
  If AState <> FEnabled then
  begin
    FEnabled := AState;
    If FEnabled then
      InstallEvent
    else
      RemoveEvent;
  end;
end;

procedure TArtMMTimer.DoOnCallback;
begin
  If Assigned( FOnTimer ) then
    FOnTimer( Self );
end;

procedure TArtMMTimer.SetIntervalMS( AValue : integer );
begin
  If AValue <> FIntervalMS then
  begin
    FIntervalMS := AValue;
    If Enabled then
    begin
      Enabled := False;
      Enabled := True;
    end;
  end;
end;

// End TArtMMTimer
// --------------------------------------------------------------------

end.
The accuracy of the multimedia timers is not that great. Here is an article that explains why.
Instead of relying on a timer to wake up your thread why don't you manage your sleep and wake times all within the thread itself?
Maybe something like this (in pseudo-code, sorry I don't know Delphi):
my_critical_thread()
{
    while (true) {
        time = get_current_time()
        do_work();
        time = interval - (get_current_time() - time)
        if (time > 0)
            sleep(time)
    }
}
This should get you very close to your target interval if the thread is set to critical priority, assuming the work you do on each iteration leaves time for your other threads and the rest of the system to do their thing.
Good luck.
Set a timer to a slightly shorter time than required (10 ms less, for example).
When the timer occurs, raise thread priority to "above normal".
Calculate the remaining time to wait and execute Sleep with a slightly shorter interval (1 ms less, for example).
Now wait in a loop for the correct time. In each loop iteration execute at least one asm pause; end; instruction so as not to push a core to 100% usage.
When the time occurs, lower thread priority to "normal".
I think that's the best you can do without writing a device driver.
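Roughly, the same hybrid wait in Python (my own sketch of the steps above; time.perf_counter stands in for a high-resolution clock, and the thread-priority changes and the asm pause hint have no portable Python equivalent, so they are omitted):

import time

def wait_until(deadline, spin_margin=0.001):
    # Coarse sleep until ~1 ms before the deadline, then busy-wait for the rest.
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_margin:
            time.sleep(remaining - spin_margin)  # cheap OS sleep, deliberately a little short
        # else: keep spinning until the deadline (where the asm pause hint would go)

# tick every 22 ms without accumulating drift
interval = 0.022
next_tick = time.perf_counter() + interval
for _ in range(10):
    wait_until(next_tick)
    print('tick at', time.perf_counter())
    next_tick += interval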
Thanks for all the suggestions. In order to test them I developed a small Delphi program in order to test the suggested algorithms. Four algorithms are tested
Simple (by Brian Frost) - Uses the Multimedia timer (TimeSetEvent), which calls a callback that performs the timed task.
Threaded (by Brian Pedersen) - As with simple, but the callback is called in a separate thread. The thread receives the highest priority possible (TimeCritical).
Looping (by Miguel) - In this algorithm we don't trust the timer at all and write one ourselves. The callback is performed in a loop, after each call we examine how much time is still left until the next tick and wait until the next has to take place. The thread has highest priority. I used the asm pause; end suggestion from Gabr as a fast way of event processing.
Fire & Forget (by myself) - The multimedia timer at each tick creates a separate thread with the highest priority, assigns it the callback and forgets about it. This has the advantage that even when a previous thread has not yet finished, a new one can start already and if you are lucky - on a new processor.
You can find the results here. Most timers run correctly with a normal workload, though the simple multimedia timer shows the most variability. The looping timer is the most precise. All timers except the fire & forget run into problems when the workload results in work that lasts longer than the interval. The best performance comes from the fire & forget algorithm. However, one should take care that no shared resources are used in the callback, because the callback can be invoked multiple times when the workload increases. In fact, in the current implementation MIDIOut can be invoked simultaneously, so it should be surrounded by a critical section.
When other programs are running, the timers show greater variability. The looping timer still performs best.
When I add the timer to my original MIDI sequencer the original problem remains, though: hints keep interrupting the timer as before, although they did not do so in the test version, which you can download.
Columns: Timer, N, Interval, Resolution, WorkLoad (parameters); Mean, s.d., Min, Max (constancy of beat); Mean, s.d., Min, Max (workload)
Timer N Interval Resolution WorkLoad Mean s.d. Min Max Mean s.d. Min Max
Simple 226 22 30 1000 22.001 0.001 21.996 22.009 0.093 0.036 0.079 0.302
Threaded 226 22 30 1000 22.001 0.004 21.964 22.031 0.091 0.032 0.079 0.253
Looping 227 22 30 1000 22.000 0.002 21.999 22.025 0.093 0.034 0.079 0.197
Fire & Forget 226 22 30 1000 22.001 0.008 21.964 22.042 0.091 0.031 0.079 0.186
Simple 226 22 15 1000 22.001 0.002 21.989 22.011 0.091 0.031 0.079 0.224
Threaded 226 22 15 1000 22.001 0.003 21.978 22.031 0.091 0.032 0.079 0.185
Looping 227 22 15 1000 22.000 0.001 21.999 22.015 0.092 0.034 0.079 0.209
Fire & Forget 226 22 15 1000 22.001 0.015 21.861 22.146 0.091 0.031 0.079 0.173
Simple 226 22 0 1000 22.001 0.001 21.997 22.005 0.091 0.030 0.079 0.190
Threaded 226 22 0 1000 22.001 0.003 21.979 22.029 0.091 0.031 0.079 0.182
Looping 227 22 0 1000 22.000 0.000 21.999 22.002 0.092 0.034 0.079 0.194
Fire & Forget 226 22 0 1000 22.001 0.026 21.747 22.256 0.090 0.030 0.079 0.180
Simple 226 22 30 10000 22.001 0.002 21.992 22.012 0.801 0.034 0.787 1.001
Threaded 226 22 30 10000 22.001 0.002 21.994 22.008 0.800 0.031 0.787 0.898
Looping 227 22 30 10000 22.000 0.000 21.999 22.000 0.802 0.034 0.787 0.919
Fire & Forget 226 22 30 10000 22.001 0.010 21.952 22.087 0.903 0.230 0.788 1.551
Simple 226 22 15 10000 22.001 0.002 21.984 22.020 0.810 0.081 0.788 1.417
Threaded 226 22 15 10000 22.001 0.006 21.981 22.073 0.800 0.031 0.788 0.889
Looping 227 22 15 10000 22.000 0.000 21.999 22.000 0.802 0.036 0.787 0.969
Fire & Forget 226 22 15 10000 22.001 0.009 21.914 22.055 0.799 0.030 0.788 0.885
Simple 226 22 0 10000 22.001 0.002 21.994 22.006 0.799 0.030 0.788 0.894
Threaded 226 22 0 10000 22.001 0.005 21.953 22.048 0.799 0.030 0.787 0.890
Looping 227 22 0 10000 22.000 0.000 21.999 22.002 0.801 0.034 0.787 0.954
Fire & Forget 226 22 0 10000 22.001 0.007 21.977 22.029 0.799 0.030 0.788 0.891
Simple 226 22 30 100000 22.001 0.002 21.988 22.017 7.900 0.052 7.879 8.289
Threaded 226 22 30 100000 22.001 0.003 21.967 22.035 7.897 0.036 7.879 8.185
Looping 227 22 30 100000 22.000 0.001 21.999 22.015 7.908 0.098 7.879 9.165
Fire & Forget 225 22 30 100000 22.001 0.007 21.960 22.027 7.901 0.038 7.880 8.061
Simple 227 22 15 100000 22.014 0.195 21.996 24.934 7.902 0.056 7.879 8.351
Threaded 226 22 15 100000 22.001 0.002 21.997 22.008 7.900 0.049 7.879 8.362
Looping 227 22 15 100000 22.000 0.000 22.000 22.000 7.900 0.046 7.879 8.229
Fire & Forget 225 22 15 100000 22.001 0.008 21.962 22.065 7.906 0.082 7.880 8.891
Simple 227 22 0 100000 22.018 0.261 21.937 25.936 7.901 0.050 7.879 8.239
Threaded 226 22 0 100000 22.001 0.001 21.998 22.005 7.897 0.031 7.879 7.987
Looping 227 22 0 100000 22.000 0.000 21.999 22.000 7.901 0.053 7.879 8.263
Fire & Forget 225 22 0 100000 22.001 0.007 21.967 22.032 7.900 0.044 7.880 8.308
Simple 63 22 30 1000000 78.027 6.801 24.938 80.730 77.754 8.947 7.890 80.726
Threaded 56 22 30 1000000 87.908 1.334 78.832 91.787 78.897 0.219 78.819 80.430
Looping 62 22 30 1000000 78.923 0.320 78.808 80.749 78.923 0.320 78.808 80.748
Fire & Forget 222 22 30 1000000 22.001 0.009 21.956 22.038 84.212 3.431 78.825 91.812
Simple 66 22 15 1000000 75.656 13.090 21.994 80.714 79.183 1.559 78.811 90.950
Threaded 56 22 15 1000000 87.841 1.204 78.991 88.011 78.849 0.043 78.812 79.003
Looping 62 22 15 1000000 78.880 0.207 78.807 80.442 78.880 0.207 78.807 80.441
Fire & Forget 222 22 15 1000000 22.001 0.978 11.975 32.042 84.915 3.569 78.816 90.917
Simple 66 22 0 1000000 75.681 12.992 21.991 80.778 79.213 1.400 78.807 87.766
Threaded 56 22 0 1000000 87.868 1.238 78.889 89.515 78.954 0.597 78.813 83.164
Looping 62 22 0 1000000 78.942 0.307 78.806 80.380 78.942 0.307 78.806 80.379
Fire & Forget 222 22 0 1000000 22.001 0.011 21.926 22.076 83.953 3.103 78.821 91.145

MonoTouch instability continues: managed memory allocator crashes

Long story short: I can allocate tons of unmanaged memory, but trying to allocate the same amount (or far less) in managed memory crashes MonoTouch in GC_remap (callstack below).
Details:
I'll talk about one example of the behavior I described above. My app allocates a 2.5MB chunk of managed memory (using new byte[]) occasionally, and it often dies on my iPhone4 with the callstack pasted below (i.e. mprotect error during the allocation). I don't keep a reference to these 2.5MB blocks for longer than a single function call.
The MonoTouch guys say that 'mprotect errno 12' means you've exhausted memory on your device, but the thing is, I have lots of memory available to my app. I can allocate 0MB, 10MB, or 200MB of unmanaged memory (using Marshal.AllocHGlobal) at my app's startup, touch it every frame, and it makes zero difference in the behavior of my app or in the frequency of this mprotect error.
Some notes
GC.TotalMemory tells me that my app is sitting between 3MB and 5MB of managed memory usage all the time.
I have other places in my app where I'm allocating even-larger blocks of unmanaged memory, and it never crashes there. I have created stress tests that load 4MB of (unmanaged) texture data, hand it to GL, and draw it every frame and the app is rock solid until I start asking for large blocks of managed memory.
GC.CollectionCount barely ever changes unless I call GC.Collect myself.
The same behavior happens with MonoTouch 3.2.3 as well as MonoTouch 4.0.
The same behavior happens across all of our testing devices (iPhone 3G, 3GS, 4, iPad, iPad2).
The same behavior happens in release builds and debug builds, although it happens more frequently with debug builds.
Ways to provoke the crash
If I create a thread that loops around calling GC.Collect, then sleeping for 1ms, this makes the crash happen much sooner (i.e. practically immediately if I'm in a debug build).
Using certain .NET functionality like WebRequest will cause this crash as well. I can only assume that it's allocating big blocks of managed memory somewhere in there as well.
Ways around the crash
There are two ways to reduce the frequency of the crash or to fix it altogether:
If I PRE-allocate that 2.5MB chunk of managed memory and then just keep it around for the lifetime of the app, then there's no crash.
If I pin the 2.5MB chunk of memory prior to doing anything with it, that seems to help.
Conclusions / Questions
We have yet to achieve full stability in our app due to this issue. This crash (always inside GC_remap) happens in random allocations throughout our app (the 2.5MB example I have here is just the one that I chose to isolate and repro).
Questions:
Can I not trust the managed allocator at all?
Why is it that I can allocate 200MB of unmanaged memory, but the managed allocator dies when I'm asking for 2.5MB? (Note: It'll die when I ask for 2.5MB even when I haven't allocated the 200MB of unmanaged memory).
Why is it that the app is totally fine if I hog that 2.5MB for the lifetime of the app, but if I give it back to the system (and call GC.Collect) and ask for another 2.5MB later, the crashyness is worse! If this really is a low memory condition, shouldn't it be better to give 2.5MB back to the system than to hog it?
Can we even use MonoTouch?
My team is seriously considering abandoning MonoTouch for our product because we can't get it to be reliably stable.
We also can't get the time of day from the MonoTouch team either on stackoverflow, by filing bugs on Novell's site, or by emailing MonoTouch's support email directly. We have reduced our (managed and unmanaged) memory usage to ridiculous lows, but the app is still crashy due to this issue.
In the short term, the only workaround I've got in mind is to allocate a big chunk of memory (2-5MB) at startup, PIN it so the garbage collector never touches it, and write my own allocator to dole out parts of this memory block to my app as needed. But if this is the best solution that is possible under MonoTouch, then I'm going to want my money back as soon as I can achieve escape velocity from MonoTouch.
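For illustration only, the kind of sub-allocator I have in mind looks roughly like this (a Python sketch of the concept; the real thing would be C# handing out slices of a pinned byte[]):

class BlockAllocator:
    # Toy bump allocator over one block allocated up front, so later requests
    # never ask the runtime (or its GC) for new large chunks.
    def __init__(self, size):
        self.buffer = bytearray(size)   # the single big up-front allocation
        self.offset = 0

    def alloc(self, size):
        if self.offset + size > len(self.buffer):
            raise MemoryError('block exhausted')
        view = memoryview(self.buffer)[self.offset:self.offset + size]
        self.offset += size
        return view

    def reset(self):
        self.offset = 0                 # reclaim everything in one go

pool = BlockAllocator(2 * 1024 * 1024)  # 2 MB pool
chunk = pool.alloc(64 * 1024)           # hand out a 64 KB slice, no new allocation
chunk[0:4] = b'\x01\x02\x03\x04'
pool.reset()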
...
Mprotect failed at 0xaa00000 (length 3801088) with errno 12
Stacktrace:
at MyApp.GameScreen/VerifyPictureDialog.StoreBasePictureData () [0x00000] in /Users/dussault/s/MyApp/Main/Src/PhotoScreens.cs:428
at MyApp.GameScreen/VerifyPictureDialog.ApplyFilters (bool) [0x0004b] in /Users/dussault/s/MyApp/Main/Src/PhotoScreens.cs:640
at MyApp.GameScreen/VerifyPictureDialog.Simulate (single) [0x00077] in /Users/dussault/s/MyApp/Main/Src/PhotoScreens.cs:477
at MyApp.BaseWindow.Simulate (single) [0x00007] in /Users/dussault/s/MyApp/Main/Src/BaseWindow.cs:56
at MyApp.BaseWindow.Simulate (single) [0x00007] in /Users/dussault/s/MyApp/Main/Src/BaseWindow.cs:56
at MyApp.GameScreen.Simulate (single) [0x00238] in /Users/dussault/s/MyApp/Main/Src/GameScreen.cs:3114
at MyApp.BaseWindow.Simulate (single) [0x00007] in /Users/dussault/s/MyApp/Main/Src/BaseWindow.cs:56
at MyApp.WindowMgr.Simulate (single) [0x0002f] in /Users/dussault/s/MyApp/Main/Src/WindowMgr.cs:126
at MyApp.Game1.Update (Microsoft.Xna.Framework.GameTime) [0x0010f] in /Users/dussault/s/MyApp/Main/Src/Game1.cs:1194
at Microsoft.Xna.Framework.Game.DispatchUpdate (Microsoft.Xna.Framework.GameTime) [0x00000] in /Users/dussault/s/MyApp/Main/Src/XNA-Emulation/GraphicsDevice.cs:531
at MyApp_iOS.EAGLView.OnUpdateFrame () [0x00050] in /Users/dussault/s/MyApp/Main/Src/iOS/EAGLView.cs:310
at MyApp_iOS.EAGLView.SimulateAndRender () [0x0000a] in /Users/dussault/s/MyApp/Main/Src/iOS/EAGLView.cs:279
at MyApp_iOS.EAGLView.MainLoopTimerCallback () [0x00006] in /Users/dussault/s/MyApp/Main/Src/iOS/EAGLView.cs:231
at MonoTouch.Foundation.NSActionDispatcher.Apply () <0x0002b>
at (wrapper runtime-invoke) object.runtime_invoke_dynamic (intptr,intptr,intptr,intptr) <0xffffffff>
at MonoTouch.UIKit.UIApplication.Main (string[],string,string) <0x000cf>
at MonoTouch.UIKit.UIApplication.Main (string[]) <0x00023>
at MyApp_iOS.Application.Main (string[]) [0x00000] in /Users/dussault/s/MyApp/Main/Src/iOS/Main.cs:57
at (wrapper runtime-invoke) object.runtime_invoke_dynamic (intptr,intptr,intptr,intptr) <0xffffffff>
Native stacktrace:
0 MyApp_iOS 0x00af1b48 mono_handle_native_sigsegv + 412
1 MyApp_iOS 0x00b1c66c sigabrt_signal_handler + 148
2 libSystem.B.dylib 0x33bd3ddf _sigtramp + 42
3 libSystem.B.dylib 0x33bd52cb kill + 10
4 libSystem.B.dylib 0x33bd52bd raise + 16
5 libSystem.B.dylib 0x33be9d79 abort + 56
6 MyApp_iOS 0x00c74378 GC_remap + 200
7 MyApp_iOS 0x00c62c04 GC_allochblk_nth + 1536
8 MyApp_iOS 0x00c625b4 GC_allochblk + 96
9 MyApp_iOS 0x00c6bf6c GC_alloc_large + 132
10 MyApp_iOS 0x00c6c5e8 GC_generic_malloc + 324
11 MyApp_iOS 0x00c6c8c8 GC_malloc_atomic + 332
12 MyApp_iOS 0x00bd8e88 mono_object_allocate_ptrfree + 64
13 MyApp_iOS 0x00bd8ff4 mono_array_new_specific + 148
14 MyApp_iOS 0x009173f4 wrapper_managed_to_native_object___icall_wrapper_mono_array_new_specific_intptr_int + 68
15 MyApp_iOS 0x002cd880 MyApp_GameScreen_VerifyPictureDialog_ApplyFilters_bool + 628
16 MyApp_iOS 0x002cbffc MyApp_GameScreen_VerifyPictureDialog_Simulate_single + 768
17 MyApp_iOS 0x002ef9d0 MyApp_BaseWindow_Simulate_single + 280
18 MyApp_iOS 0x002ef9d0 MyApp_BaseWindow_Simulate_single + 280
19 MyApp_iOS 0x002a71fc MyApp_GameScreen_Simulate_single + 2736
20 MyApp_iOS 0x002ef9d0 MyApp_BaseWindow_Simulate_single + 280
21 MyApp_iOS 0x0038068c MyApp_WindowMgr_Simulate_single + 376
22 MyApp_iOS 0x0027f798 MyApp_Game1_Update_Microsoft_Xna_Framework_GameTime + 1992
23 MyApp_iOS 0x0039afc8 Microsoft_Xna_Framework_Game_DispatchUpdate_Microsoft_Xna_Framework_GameTime + 148
24 MyApp_iOS 0x0026ec10 MyApp_iOS_EAGLView_OnUpdateFrame + 716
25 MyApp_iOS 0x0026e8cc MyApp_iOS_EAGLView_SimulateAndRender + 196
26 MyApp_iOS 0x0026e1cc MyApp_iOS_EAGLView_MainLoopTimerCallback + 296
27 MyApp_iOS 0x009a7dfc MonoTouch_Foundation_NSActionDispatcher_Apply + 44
28 MyApp_iOS 0x00912540 wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 200
29 MyApp_iOS 0x00acc9c4 mono_jit_runtime_invoke + 2800
30 MyApp_iOS 0x00bd3ea4 mono_runtime_invoke + 140
31 MyApp_iOS 0x00c7d214 monotouch_trampoline + 2840
32 Foundation 0x3363b469 __NSFireTimer + 136
33 CoreFoundation 0x33a770a3 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 14
34 CoreFoundation 0x33a76b5b __CFRunLoopDoTimer + 850
35 CoreFoundation 0x33a481b5 __CFRunLoopRun + 1088
36 CoreFoundation 0x33a47c87 CFRunLoopRunSpecific + 230
37 CoreFoundation 0x33a47b8f CFRunLoopRunInMode + 58
38 GraphicsServices 0x33b0e4ab GSEventRunModal + 114
39 GraphicsServices 0x33b0e557 GSEventRun + 62
40 UIKit 0x32099329 -[UIApplication _run] + 412
41 UIKit 0x32096e93 UIApplicationMain + 670
42 MyApp_iOS 0x009d484c wrapper_managed_to_native_MonoTouch_UIKit_UIApplication_UIApplicationMain_int_string___intptr_intptr + 240
43 MyApp_iOS 0x009b4c00 MonoTouch_UIKit_UIApplication_Main_string__ + 36
44 MyApp_iOS 0x00269694 MyApp_iOS_Application_Main_string__ + 128
45 MyApp_iOS 0x00912540 wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 200
46 MyApp_iOS 0x00acc9c4 mono_jit_runtime_invoke + 2800
47 MyApp_iOS 0x00bd3ea4 mono_runtime_invoke + 140
48 MyApp_iOS 0x00bd6f3c mono_runtime_exec_main + 784
49 MyApp_iOS 0x00bd5f6c mono_runtime_run_main + 1048
50 MyApp_iOS 0x00ad7940 mono_jit_exec + 216
51 MyApp_iOS 0x00ac2e38 main + 3536
52 MyApp_iOS 0x000133a0 start + 52
Debug info from gdb:
=================================================================
Got a SIGABRT while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
The Darwin kernel overcommits memory.
What this means is that if you request 200 megs of RAM, you will get them even if they are not available, and as long as you do not actually consume the memory, your application will run fine.
Only when you actually write to the page will the page be allocated to your process.
A proper test would require you to allocate the memory and then actually fill it; this is why you get the impression that you can allocate 200 megs of RAM even when you might not actually have it.
A simple program will show you this: try allocating 500 megs and the OS will say "Yes, you got it", but there are no iPhones with that kind of memory.
A sample test case would go a long way to showing what the problem is.
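A minimal illustration of that overcommit behaviour, as a Python sketch on a desktop interpreter (not an iOS/MonoTouch test case): the anonymous mapping itself succeeds instantly, and only touching the pages forces the kernel to back them with real memory.

import mmap

size = 500 * 1024 * 1024            # "500 megs" of address space
region = mmap.mmap(-1, size)        # anonymous mapping: succeeds without committing pages
print('mapped', size // (1024 * 1024), 'MB without touching it')

# Writing to each page is what actually consumes memory; on a device without
# this much free RAM, this is the part that would fail or get the process killed.
for offset in range(0, size, mmap.PAGESIZE):
    region[offset] = 1
print('touched every page')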
