Serialize tracemalloc output to a JSON string in Python 3.7 [closed] - python-3.x

I need to serialize the result of tracemalloc to a JSON string.
current_mem, peak_mem = tracemalloc.get_traced_memory()
overhead = tracemalloc.get_tracemalloc_memory()
stats = tracemalloc.take_snapshot().statistics('traceback')[:top]
summary = "traced memory: %d KiB peak: %d KiB overhead: %d KiB" % (
    int(current_mem // 1024), int(peak_mem // 1024), int(overhead // 1024)
)
logging.info("%s", summary)
out_lines = [summary]
for trace in stats:
    out_lines.append("---")
    out_lines.append("%d KiB in %d blocks" % (int(trace.size // 1024), int(trace.count)))
    logging.info("%s", out_lines)
    out_lines.extend(trace.traceback.format())
    out_lines.append('')
data = {}
data['traceback'] = '\n'.join(out_lines).encode('utf-8')
res = json.dumps(data)
print(res)
When I dump data to JSON I get:
Object of type bytes is not JSON serializable
From logging I can see the string output:
2020-01-08 11:54:25 - INFO - traced memory: 35 KiB peak: 91 KiB overhead: 31 KiB
2020-01-08 11:54:25 - INFO - ['traced memory: 35 KiB peak: 91 KiB overhead: 31 KiB', '---', '1 KiB in 4 blocks']
and then in the loop:
2020-01-08 11:54:26 - ERROR - ['traced memory: 35 KiB peak: 91 KiB overhead: 31 KiB', '---', '1 KiB in 4 blocks', ' File "/usr/local/lib/python3.7/site-packages/tornado/routing.py", line 256', ' self.delegate.finish()', ' File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 2195', ' self.execute()', ' File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 2228', ' **self.path_kwargs)', ' File "/usr/local/lib/python3.7/site-packages/tornado/gen.py", line 326', ' yielded = next(result)', ' File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 1590', ' result = method(*self.path_args, **self.path_kwargs)', ' File "/tornado/handlers/memTraceHandler.py", line 56', ' self.write(json.dumps(response.getData()))', '---', '0 KiB in 2 blocks']
So which is the b"" string that I cannot serialize?

YOU are creating the bytes object here:
data['traceback'] = '\n'.join(out_lines).encode('utf-8')
That's what calling .encode() does: it turns the str into a bytes object, and json.dumps() cannot serialize bytes.
Simply do:
data['traceback'] = '\n'.join(out_lines)
And it will dump out fine.
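For reference, here is a minimal, self-contained sketch of the corrected version. The memory_report() helper, its top parameter, and the tracemalloc.start() call are illustrative additions, not part of the question's code:

import json
import tracemalloc

def memory_report(top=10):
    # Gather the same data as in the question.
    current_mem, peak_mem = tracemalloc.get_traced_memory()
    overhead = tracemalloc.get_tracemalloc_memory()
    stats = tracemalloc.take_snapshot().statistics('traceback')[:top]

    out_lines = ["traced memory: %d KiB peak: %d KiB overhead: %d KiB" % (
        current_mem // 1024, peak_mem // 1024, overhead // 1024)]
    for trace in stats:
        out_lines.append("---")
        out_lines.append("%d KiB in %d blocks" % (trace.size // 1024, trace.count))
        out_lines.extend(trace.traceback.format())

    # Keep the value as str: json.dumps() serializes str, not bytes.
    return json.dumps({'traceback': '\n'.join(out_lines)})

tracemalloc.start(25)  # trace allocations, keeping up to 25 frames per traceback
print(memory_report())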

Related

Python: Memory consumption accumulating in a while loop

A confession first - a noob programmer here doing occasional scripting. I've been trying to figure out the memory consumption of this simple piece of code but haven't been able to. I have tried searching the answered questions but couldn't figure it out. I'm fetching some JSON data using a REST API, and the piece of code below ends up consuming a lot of RAM. I checked the Windows Task Manager and the memory consumption increases incrementally with each iteration of the loop. I'm overwriting the same variable for each API call, so I'd expect the previous response to be overwritten.
while Flag == True:
    urlpart = 'data/device/statistics/approutestatsstatistics?scrollId=' + varScrollId
    response = json.loads(obj1.get_request(urlpart))
    lstDataList = lstDataList + response['data']
    Flag = response['pageInfo']['hasMoreData']
    varScrollId = response['pageInfo']['scrollId']
    count += 1
    print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
    print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
return lstDataList
I tried to profile memory usage using memory_profiler... here's what it shows:
92 119.348 MiB 0.000 MiB count = 0
93 806.938 MiB 0.000 MiB while Flag == True:
94 806.938 MiB 0.000 MiB urlpart= 'data/device/statistics/approutestatsstatistics?scrollId='+varScrollId
95 807.559 MiB 30.293 MiB response = json.loads(obj1.get_request(urlpart))
96 806.859 MiB 0.000 MiB print('Size of response within the loop is {}'.format(sys.getsizeof(response)))
97 806.938 MiB 1.070 MiB lstDataList = lstDataList + response['data']
98 806.938 MiB 0.000 MiB Flag = response['pageInfo']['hasMoreData']
99 806.938 MiB 0.000 MiB varScrollId = response['pageInfo']['scrollId']
100 806.938 MiB 0.000 MiB count += 1
101 806.938 MiB 0.000 MiB print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
102 806.938 MiB 0.000 MiB print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
103 return lstDataList
obj1 is an object of Cisco's rest_api_lib class. Link to code here
In fact the program ends up consuming ~1.6 GB of RAM. The data I'm fetching has roughly 570K records. The API limits the records to 10K at a time, so the loop runs ~56 times. Line 95 of the code consumes ~30M of RAM as per the memory_profiler output. It's as if each iteration consumes 30M, ending up with ~1.6G, so it's in the same ballpark. I'm unable to figure out why the memory consumption keeps accumulating in the loop.
Thanks.
I would suspect it is the line lstDataList = lstDataList + response['data']
This is accumulating response['data'] over time. Also, your indentation seems off; should it be:
while Flag == True:
    urlpart = 'data/device/statistics/approutestatsstatistics?scrollId=' + varScrollId
    response = json.loads(obj1.get_request(urlpart))
    lstDataList = lstDataList + response['data']
    Flag = response['pageInfo']['hasMoreData']
    varScrollId = response['pageInfo']['scrollId']
    count += 1
    print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
    print('Size of List is now {}'.format(str(sys.getsizeof(lstDataList))))
return lstDataList
As far as I can tell, lstDataList will keep growing with each request, leading to the memory increase. Hope that helps, Happy Friday!
it's as if each iteration consumes 30M
That is exactly what is happening. You need to free memory that you don't need, for example once you have extracted the data from response. You can delete it like so:
del response
more on del
more on garbage collection
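For illustration, a hedged sketch of the loop with the response dropped on each iteration. The names obj1, varScrollId and recordCount come from the question; obj1.get_request() is assumed to return a JSON string for the given URL fragment:

import json

def fetch_all(obj1, varScrollId, recordCount):
    lstDataList = []
    Flag = True
    while Flag:
        urlpart = 'data/device/statistics/approutestatsstatistics?scrollId=' + varScrollId
        response = json.loads(obj1.get_request(urlpart))
        lstDataList += response['data']   # extend in place instead of rebuilding the list
        Flag = response['pageInfo']['hasMoreData']
        varScrollId = response['pageInfo']['scrollId']
        del response                      # drop the parsed page once its data has been copied out
        print("Fetched {} records out of {}".format(len(lstDataList), recordCount))
    return lstDataList

Note that lstDataList itself still holds every record, so the overall footprint only drops substantially if each page is processed (for example written to disk) instead of accumulated in memory.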

How can I reduce the virtual memory required by a gccgo-compiled executable?

When I compile this simple hello world example using gccgo, the resulting executable uses over 800 MiB of VmData. I would like to know why, and if there is anything I can do to lower that. The sleep is just to give me time to observe the memory usage.
The source:
package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("hello world")
    time.Sleep(1000000000 * 5)
}
The script I use to compile:
#!/bin/bash
TOOLCHAIN_PREFIX=i686-linux-gnu
OPTIMIZATION_FLAG="-O3"
CGO_ENABLED=1 \
CC=${TOOLCHAIN_PREFIX}-gcc-8 \
CXX=${TOOLCHAIN_PREFIX}-g++-8 \
AR=${TOOLCHAIN_PREFIX}-ar \
GCCGO=${TOOLCHAIN_PREFIX}-gccgo-8 \
CGO_CFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_CPPFLAGS="" \
CGO_CXXFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_FFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_LDFLAGS="-g ${OPTIMIZATION_FLAG}" \
GOOS=linux \
GOARCH=386 \
go build -x \
-compiler=gccgo \
-gccgoflags=all="-static -g ${OPTIMIZATION_FLAG}" \
$1
The version of gccgo:
$ i686-linux-gnu-gccgo-8 --version
i686-linux-gnu-gccgo-8 (Ubuntu 8.2.0-1ubuntu2~18.04) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The output from /proc/<pid>/status:
VmPeak: 811692 kB
VmSize: 811692 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5796 kB
VmRSS: 5796 kB
VmData: 807196 kB
VmStk: 132 kB
VmExe: 2936 kB
VmLib: 0 kB
VmPTE: 52 kB
VmPMD: 0 kB
VmSwap: 0 kB
I ask because my device only has 512 MiB of RAM. I know that this is virtual memory but I would like to reduce or remove the overcommit if possible. It does not seem reasonable to me for a simple executable to require that much allocation.
I was able to locate where gccgo is asking for so much memory. It's in the libgo/go/runtime/malloc.go file in the mallocinit function:
// If we fail to allocate, try again with a smaller arena.
// This is necessary on Android L where we share a process
// with ART, which reserves virtual memory aggressively.
// In the worst case, fall back to a 0-sized initial arena,
// in the hope that subsequent reservations will succeed.
arenaSizes := [...]uintptr{
    512 << 20,
    256 << 20,
    128 << 20,
    0,
}
for _, arenaSize := range &arenaSizes {
    // SysReserve treats the address we ask for, end, as a hint,
    // not as an absolute requirement. If we ask for the end
    // of the data segment but the operating system requires
    // a little more space before we can start allocating, it will
    // give out a slightly higher pointer. Except QEMU, which
    // is buggy, as usual: it won't adjust the pointer upward.
    // So adjust it upward a little bit ourselves: 1/4 MB to get
    // away from the running binary image and then round up
    // to a MB boundary.
    p = round(getEnd()+(1<<18), 1<<20)
    pSize = bitmapSize + spansSize + arenaSize + _PageSize
    if p <= procBrk && procBrk < p+pSize {
        // Move the start above the brk,
        // leaving some room for future brk
        // expansion.
        p = round(procBrk+(1<<20), 1<<20)
    }
    p = uintptr(sysReserve(unsafe.Pointer(p), pSize, &reserved))
    if p != 0 {
        break
    }
}
if p == 0 {
    throw("runtime: cannot reserve arena virtual address space")
}
The interesting part is that it falls back to smaller arena sizes if larger ones fail. So limiting the virtual memory available to a go executable will actually limit how much it will successfully allocate.
I was able to use ulimit -v 327680 to limit the virtual memory to smaller numbers:
VmPeak: 300772 kB
VmSize: 300772 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5712 kB
VmRSS: 5712 kB
VmData: 296276 kB
VmStk: 132 kB
VmExe: 2936 kB
VmLib: 0 kB
VmPTE: 56 kB
VmPMD: 0 kB
VmSwap: 0 kB
These are still big numbers, but the best that a gccgo executable can achieve. So the answer to the question is: yes, you can reduce the VmData of a gccgo-compiled executable, but you really shouldn't worry about it. (On a 64-bit machine gccgo tries to allocate 512 GB.)
The likely cause is that you are linking libraries into the code. My guess is that you'd be able to get a smaller logical address space if you explicitly linked to static libraries, so that only the minimum is added to your executable. In any event, there is minimal harm in having a large logical address space.

How to truncate/remove contents from a file until it reaches a particular size

I have a text file sized around 10 KB. I need to remove lines one by one from the beginning of the file and stop when the file reaches 5 KB. I used the following piece of code, but it is not giving me the accurate results I want (for example, if I want to reduce it to 5 KB, it stops when it reaches 6.5 KB).
cacheCutOffSize = 5 * 1024; //(in KB)
using (StreamWriter sr = new StreamWriter(fileName))
{
    sr.AutoFlush = true;
    while ((sr.BaseStream.Length < cacheCutOffSize) && lines.Count > 0) //*8
    {
        sr.WriteLine(strng);
        lines.RemoveAt(0);
        strng = lines[0];
    }
}
Is there a better and more accurate way of doing this?
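One way to make the cutoff exact is to measure the encoded byte length of the lines that will be kept, rather than checking the underlying stream's length while writing. A rough, language-agnostic sketch of that idea in Python (the file name is a placeholder), dropping lines from the front until the remainder fits under 5 KB:

CUTOFF = 5 * 1024  # 5 KiB, counted in encoded bytes

with open("cache.txt", "r", encoding="utf-8") as f:
    lines = f.readlines()

# Drop lines from the beginning until what is left fits under the cutoff.
while lines and sum(len(line.encode("utf-8")) for line in lines) > CUTOFF:
    lines.pop(0)

with open("cache.txt", "w", encoding="utf-8") as f:
    f.writelines(lines)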

Golang: fatal error: runtime: out of memory

I'm trying to use this package on GitHub for string matching. My dictionary is 4 MB. When creating the trie, I got fatal error: runtime: out of memory. I am using Ubuntu 14.04 with 8 GB of RAM and Go version 1.4.2.
It seems the error comes from line 99 (now) here: m.trie = make([]node, max)
The program stops at this line.
This is the error:
fatal error: runtime: out of memory
runtime stack:
runtime.SysMap(0xc209cd0000, 0x3b1bc0000, 0x570a00, 0x5783f8)
/usr/local/go/src/runtime/mem_linux.c:149 +0x98
runtime.MHeap_SysAlloc(0x57dae0, 0x3b1bc0000, 0x4296f2)
/usr/local/go/src/runtime/malloc.c:284 +0x124
runtime.MHeap_Alloc(0x57dae0, 0x1d8dda, 0x10100000000, 0x8)
/usr/local/go/src/runtime/mheap.c:240 +0x66
goroutine 1 [running]:
runtime.switchtoM()
/usr/local/go/src/runtime/asm_amd64.s:198 fp=0xc208518a60 sp=0xc208518a58
runtime.mallocgc(0x3b1bb25f0, 0x4d7fc0, 0x0, 0xc20803c0d0)
/usr/local/go/src/runtime/malloc.go:199 +0x9f3 fp=0xc208518b10 sp=0xc208518a60
runtime.newarray(0x4d7fc0, 0x3a164e, 0x1)
/usr/local/go/src/runtime/malloc.go:365 +0xc1 fp=0xc208518b48 sp=0xc208518b10
runtime.makeslice(0x4a52a0, 0x3a164e, 0x3a164e, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/slice.go:32 +0x15c fp=0xc208518b90 sp=0xc208518b48
github.com/mf/ahocorasick.(*Matcher).buildTrie(0xc2083c7e60, 0xc209860000, 0x26afb, 0x2f555)
/home/go/ahocorasick/ahocorasick.go:104 +0x28b fp=0xc208518d90 sp=0xc208518b90
github.com/mf/ahocorasick.NewStringMatcher(0xc208bd0000, 0x26afb, 0x2d600, 0x8)
/home/go/ahocorasick/ahocorasick.go:222 +0x34b fp=0xc208518ec0 sp=0xc208518d90
main.main()
/home/go/seme/substrings.go:66 +0x257 fp=0xc208518f98 sp=0xc208518ec0
runtime.main()
/usr/local/go/src/runtime/proc.go:63 +0xf3 fp=0xc208518fe0 sp=0xc208518f98
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208518fe8 sp=0xc208518fe0
exit status 2
This is the content of the main function (taken from the same repo: test file)
var dictionary = InitDictionary()
var bytes = []byte("Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days).")
var precomputed = ahocorasick.NewStringMatcher(dictionary) // line 66 here
fmt.Println(precomputed.Match(bytes))
Your structure is awfully inefficient in terms of memory; let's look at the internals. But before that, a quick reminder of the space required for some Go types:
bool: 1 byte
int: 4 bytes
uintptr: 4 bytes
[N]type: N*sizeof(type)
[]type: 12 + len(slice)*sizeof(type)
Now, let's have a look at your structure:
type node struct {
root bool // 1 byte
b []byte // 12 + len(slice)*1
output bool // 1 byte
index int // 4 bytes
counter int // 4 bytes
child [256]*node // 256*4 = 1024 bytes
fails [256]*node // 256*4 = 1024 bytes
suffix *node // 4 bytes
fail *node // 4 bytes
}
OK, you should have a guess of what happens here: each node weighs more than 2 KB, which is huge! Finally, let's look at the code you use to initialize your trie:
func (m *Matcher) buildTrie(dictionary [][]byte) {
    max := 1
    for _, blice := range dictionary {
        max += len(blice)
    }
    m.trie = make([]node, max)
    // ...
}
You said your dictionary is 4 MB. If it is 4 MB in total, then at the end of the for loop max = 4M. If it holds 4M different words, then max = 4M * avg(word_length).
We'll take the first scenario, the nicest one. You are initializing a slice of 4M nodes, each of which uses 2 KB. Yup, that makes a nice 8 GB necessary.
You should review how you build your trie. From the Wikipedia page on the Aho-Corasick algorithm, each node contains one character, so there are at most 256 characters that go from the root, not 4M.
Some material to make it right: https://web.archive.org/web/20160315124629/http://www.cs.uku.fi/~kilpelai/BSA05/lectures/slides04.pdf
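For intuition, here is a small Python sketch (not the Go package's API) of a character-keyed trie: children are created only when a prefix actually occurs, so memory grows with the number of distinct prefixes instead of a fixed 256-pointer array per node:

def insert(root, word):
    # Walk down the trie, creating a child dict only when the character is new.
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True  # mark the end of a word

trie = {}
for w in ["fizz", "buzz", "123"]:
    insert(trie, w)
print(trie)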
The node type has a memory size of 2084 bytes.
I wrote a little program to demonstrate the memory usage: https://play.golang.org/p/szm7AirsDB
As you can see, the three strings (11(+1) bytes in size) dictionary := []string{"fizz", "buzz", "123"} require 24 MB of memory.
If your dictionary has a length of 4 MB you would need about 4,000,000 * 2084 bytes ≈ 8.3 GB of memory.
So you should try to decrease the size of your dictionary.
Setting the resource limit to unlimited worked for me.
If ulimit -a returns 0, run ulimit -c unlimited.
Maybe set a real size limit to be more secure.

Memory Leak in Pango

I am using the Pango library alongside Cairo, without GTK, in a test-drive application which I'm currently compiling on macOS. I have a memory leak that I have traced to this function:
void draw_with_cairo (void)
{
    PangoLayout *layout;
    PangoFontDescription *desc;
    int i;
    cairo_save (cr);
    cairo_scale (cr, 1, -1);
    cairo_translate (cr, 0, -HEIGHT);
    cairo_translate (cr, 400, 300);
    layout = pango_cairo_create_layout (cr);
    pango_layout_set_text (layout, "Test", -1);
    desc = pango_font_description_from_string ("BMitra 32");
    pango_layout_set_font_description (layout, desc);
    pango_font_description_free (desc);
    for (i = 0; i < 12; i++)
    {
        int width, height;
        double angle = iter + (360.0 * i) / 12;
        double red;
        cairo_save (cr);
        red = (1 + cos ((angle - 60) * G_PI / 180.)) / 2;
        cairo_set_source_rgb (cr, red, 0, 1.0 - red);
        cairo_rotate (cr, angle * G_PI / 180.);
        pango_cairo_update_layout (cr, layout);
        pango_layout_get_size (layout, &width, &height);
        cairo_move_to (cr, - ((double)width / PANGO_SCALE) / 2, - 250);
        pango_cairo_show_layout (cr, layout);
        cairo_restore (cr);
    }
    cairo_restore (cr);
    g_object_unref (layout);
}
This routine is being called a lot, maybe a hundred times per second, and the memory leak is huge: around 30 MB in 3 seconds, at a constant rate. When I look at this code, it seems quite fine to me. I have searched for this and found many references to memory leaks while using Pango in GTK applications, but they all look for a patch in Pango or GTK. I am really puzzled; I can't believe there would be such a bug in a heavily used library like Pango, so I think this is a problem with my own code. Any suggestions are appreciated.
This is the vmmap result for Uli's code:
Executing vmmap -resident 25897 | grep TOTAL at beginning of main()
TOTAL 321.3M 126.2M 485
TOTAL 18.0M 200K 1323 173K 0% 2
Executing vmmap -resident 25897 | grep TOTAL after cairo init
TOTAL 331.3M 126.4M 489
TOTAL 27.0M 224K 1327 1155K 4% 6
Executing vmmap -resident 25897 | grep TOTAL after one iteration
TOTAL 383.2M 143.9M 517
TOTAL 37.2M 3368K 18634 3423K 8% 5
Executing vmmap -resident 25897 | grep TOTAL after loop
TOTAL 481.6M 244.1M 514
TOTAL 137.2M 103.7M 151961 66.4M 48% 6
Executing vmmap -resident 25897 | grep TOTAL at end
TOTAL 481.6M 244.1M 520
TOTAL 136.3M 103.1M 151956 65.4M 48% 11
And this is the unfiltered output of the last stage:
Executing vmmap -resident 25751 at end
Process: main [25751]
Path: /PATH/OMITTED/main
Load Address: 0x109b9c000
Identifier: main
Version: ???
Code Type: X86-64
Parent Process: bash [837]
Date/Time: 2016-01-30 23:28:35.866 +0330
Launch Time: 2016-01-30 23:27:35.148 +0330
OS Version: Mac OS X 10.11.2 (15C50)
Report Version: 7
Analysis Tool: /Applications/Xcode.app/Contents/Developer/usr/bin/vmmap
Analysis Tool Version: Xcode 7.0.1 (7A1001)
----
Virtual Memory Map of process 25751 (main)
Output report format: 2.4 -- 64-bit process
VM page size: 4096 bytes
==== Non-writable regions for process 25751
==== Legend
SM=sharing mode:
COW=copy_on_write PRV=private NUL=empty ALI=aliased
SHM=shared ZER=zero_filled S/A=shared_alias
==== Summary for process 25751
ReadOnly portion of Libraries: Total=219.6M resident=112.2M(51%) swapped_out_or_unallocated=107.5M(49%)
Writable regions: Total=155.7M written=5448K(3%) resident=104.1M(67%) swapped_out=0K(0%) unallocated=51.6M(33%)
VIRTUAL RESIDENT REGION
REGION TYPE SIZE SIZE COUNT (non-coalesced)
=========== ======= ======== =======
Activity Tracing 2048K 12K 2
Dispatch continuations 8192K 32K 2
Kernel Alloc Once 8K 8K 3
MALLOC guard page 32K 0K 7
MALLOC metadata 364K 84K 11
MALLOC_LARGE 260K 260K 2 see MALLOC ZONE table below
MALLOC_LARGE (empty) 980K 668K 2 see MALLOC ZONE table below
MALLOC_LARGE metadata 4K 4K 2 see MALLOC ZONE table below
MALLOC_SMALL 32.0M 880K 3 see MALLOC ZONE table below
MALLOC_TINY 104.0M 102.1M 7 see MALLOC ZONE table below
STACK GUARD 56.0M 0K 3
Stack 8264K 60K 3
VM_ALLOCATE 16K 8K 2
__DATA 16.7M 13.6M 217
__IMAGE 528K 104K 2
__LINKEDIT 92.4M 22.5M 34
__TEXT 127.2M 89.6M 220
__UNICODE 552K 476K 2
mapped file 32.2M 13.7M 4
shared memory 328K 172K 10
=========== ======= ======== =======
TOTAL 481.6M 244.3M 518
VIRTUAL RESIDENT ALLOCATION BYTES REGION
MALLOC ZONE SIZE SIZE COUNT ALLOCATED % FULL COUNT
=========== ======= ========= ========= ========= ====== ======
DefaultMallocZone_0x109bd0000 136.3M 103.2M 151952 65.4M 48% 10
GFXMallocZone_0x109bd3000 0K 0K 0 0K 0
=========== ======= ========= ========= ========= ====== ======
TOTAL 136.3M 103.2M 151952 65.4M 48% 10
I have omitted the non-writable regions part because it was overflowing stackoverflow limits!
I don't see any memory leaks. The following program prints its memory usage before and after running your function above 100,000 times. Both numbers are the same for me.
#include <cairo.h>
#include <math.h>
#include <pango/pangocairo.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define HEIGHT 500
#define WIDTH 500

void draw_with_cairo (cairo_t *cr)
{
    PangoLayout *layout;
    PangoFontDescription *desc;
    int i;
    cairo_save (cr);
    cairo_scale (cr, 1, -1);
    cairo_translate (cr, 0, -HEIGHT);
    cairo_translate (cr, 400, 300);
    layout = pango_cairo_create_layout (cr);
    pango_layout_set_text (layout, "Test", -1);
    desc = pango_font_description_from_string ("BMitra 32");
    pango_layout_set_font_description (layout, desc);
    pango_font_description_free (desc);
    for (i = 0; i < 12; i++)
    {
        int width, height;
        double angle = i + (360.0 * i) / 12;
        double red;
        cairo_save (cr);
        red = (1 + cos ((angle - 60) * G_PI / 180.)) / 2;
        cairo_set_source_rgb (cr, red, 0, 1.0 - red);
        cairo_rotate (cr, angle * G_PI / 180.);
        pango_cairo_update_layout (cr, layout);
        pango_layout_get_size (layout, &width, &height);
        cairo_move_to (cr, - ((double)width / PANGO_SCALE) / 2, - 250);
        pango_cairo_show_layout (cr, layout);
        cairo_restore (cr);
    }
    cairo_restore (cr);
    g_object_unref (layout);
}

static void print_memory_usage(const char *comment)
{
    char buffer[1024];
    sprintf(buffer, "grep -E VmPeak\\|VmSize /proc/%d/status", getpid());
    printf("Executing %s %s\n", buffer, comment);
    system(buffer);
}

int main()
{
    cairo_surface_t *s;
    cairo_t *cr;
    int i;
    print_memory_usage("at beginning of main()");
    s = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, WIDTH, HEIGHT);
    cr = cairo_create(s);
    print_memory_usage("after cairo init");
    draw_with_cairo(cr);
    print_memory_usage("after one iteration");
    for (i = 0; i < 100 * 1000; i++)
        draw_with_cairo(cr);
    print_memory_usage("after loop");
    cairo_surface_destroy(s);
    cairo_destroy(cr);
    print_memory_usage("at end");
    return 0;
}
Output for me (with no traces of any memory leaks):
Executing grep -E VmPeak\|VmSize /proc/31881/status at beginning of main()
VmPeak: 76660 kB
VmSize: 76660 kB
Executing grep -E VmPeak\|VmSize /proc/31881/status after cairo init
VmPeak: 77640 kB
VmSize: 77640 kB
Executing grep -E VmPeak\|VmSize /proc/31881/status after one iteration
VmPeak: 79520 kB
VmSize: 79520 kB
Executing grep -E VmPeak\|VmSize /proc/31881/status after loop
VmPeak: 79520 kB
VmSize: 79520 kB
Executing grep -E VmPeak\|VmSize /proc/31881/status at end
VmPeak: 79520 kB
VmSize: 78540 kB
P.S.: I tested this on an up-to-date debian testing amd64.
