I have been struggling for the past day with a particular case in my app.
Throughout the day, the application fetches data from another database (around 400k entries) and processes it. After processing, it maps the items to collections in this application's own database and saves them.
Everything goes fine until the end of the function. Currently I am invoking it through an HTTP request, but it is going to become a cron job. The problem is that after the endpoint sends its response (res()), CPU usage does not drop and the heap memory is not freed. I am just wondering what could be happening.
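The flow has roughly the shape sketched below (the route and names are placeholders standing in for my real data layer, not the actual code):

app.post('/process', async (req, res) => {
  // externalDb, toLocalCollection and LocalModel are placeholder names
  const rows = await externalDb.fetchAll();      // ~400k entries from the other database
  const mapped = rows.map(toLocalCollection);    // processing / mapping step
  await LocalModel.insertMany(mapped);           // save into this application's database
  res.sendStatus(200);                           // after this, CPU stays high and the heap is not freed
});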
I read a little about identifying memory leaks and found the following. Instance [2] is the process that was used to process the data.
[ 1] App   Mem: 68 MB    CPU: 1 %     online
[ 2] App   Mem: 182 MB   CPU: 135 %   online
[ 3] App   Mem: 60 MB    CPU: 0 %     online
[ 4] App   Mem: 60 MB    CPU: 0 %     online
[ 5] App   Mem: 59 MB    CPU: 0 %     online
As you can clearly see, memory and CPU usage are way higher than on the other Node instances. I thought that sending the response would end the task, release the memory and let the CPU settle down, but it doesn't.
I use Express.js, follow the MVC pattern and keep all the logic in classes. Is there any way to free up this memory? I have tried to apply some sort of clean-up function, but it did not work :(.
I am running out of ideas on how to identify the problem. Maybe there is something obvious that I am not paying attention to, but to be honest I don't have that much experience with Node to figure it out.
I was configuring Icinga 2 to get memory-usage information from a Linux client using the check_snmp_mem.pl script. Any idea how the memory used is derived in this script?
Here is the free command output:
# free
total used free shared buff/cache available
Mem: 500016 59160 89564 3036 351292 408972
Swap: 1048572 4092 1044480
whereas the performance data shown in the Icinga dashboard is:
Label Value Max Warning Critical
ram_used 137,700.00 500,016.00 470,015.00 490,016.00
swap_used 4,092.00 1,048,572.00 524,286.00 838,858.00
Looking through the source code, ram_used is mentioned, for example, in this line:
$n_output .= " | ram_used=" . ($$resultat{$nets_ram_total}-$$resultat{$nets_ram_free}-$$resultat{$nets_ram_cache}).";";
This strongly suggests that ram_used is calculated as total RAM minus free RAM minus cached RAM. These values are retrieved via the following SNMP OIDs:
my $nets_ram_free = "1.3.6.1.4.1.2021.4.6.0"; # Real memory free
my $nets_ram_total = "1.3.6.1.4.1.2021.4.5.0"; # Real memory total
my $nets_ram_cache = "1.3.6.1.4.1.2021.4.15.0"; # Real memory cached
I don't know exactly how these correlate to the output of free. The difference between the free memory reported by free and the value reported to Icinga is 48136, so maybe you can find that number somewhere.
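One way to cross-check is to query those OIDs directly on the client (here assuming SNMP v2c and a community string of public; the values are reported in kB) and apply the same subtraction:

snmpget -v2c -c public <client-host> 1.3.6.1.4.1.2021.4.5.0    # Real memory total
snmpget -v2c -c public <client-host> 1.3.6.1.4.1.2021.4.6.0    # Real memory free
snmpget -v2c -c public <client-host> 1.3.6.1.4.1.2021.4.15.0   # Real memory cached

For comparison, applying the same formula to the free output above gives 500016 - 89564 - 351292 = 59160, which matches free's own "used" column but not the 137,700 that Icinga shows, so the SNMP agent is apparently reporting different free/cache figures than free does.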
I am having issues with a large query, which I suspect comes down to wrong settings in my postgresql.conf. My setup is PostgreSQL 9.6 on Ubuntu 17.10 with 32 GB RAM and a 3 TB HDD. The query runs pgr_dijkstraCost to create an OD matrix of ~10,000 points in a network of 25,000 links. The resulting table is thus expected to be very big (~100,000,000 rows with columns from, to, costs). However, a simple test such as select x, 1 as c2, 2 as c3 from generate_series(1,90000000) succeeds.
The query plan:
QUERY PLAN
--------------------------------------------------------------------------------------
Function Scan on pgr_dijkstracost (cost=393.90..403.90 rows=1000 width=24)
InitPlan 1 (returns $0)
-> Aggregate (cost=196.82..196.83 rows=1 width=32)
-> Seq Scan on building_nodes b (cost=0.00..166.85 rows=11985 width=4)
InitPlan 2 (returns $1)
-> Aggregate (cost=196.82..196.83 rows=1 width=32)
-> Seq Scan on building_nodes b_1 (cost=0.00..166.85 rows=11985 width=4)
This leads to a crash of PostgreSQL:
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
normally and possibly corrupted shared memory.
Running dmesg, I could trace it down to an out-of-memory issue:
Out of memory: Kill process 5630 (postgres) score 949 or sacrifice child
[ 5322.821084] Killed process 5630 (postgres) total-vm:36365660kB,anon-rss:32344260kB, file-rss:0kB, shmem-rss:0kB
[ 5323.615761] oom_reaper: reaped process 5630 (postgres), now anon-rss:0kB,file-rss:0kB, shmem-rss:0kB
[11741.155949] postgres invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null), order=0, oom_score_adj=0
[11741.155953] postgres cpuset=/ mems_allowed=0
When running the query I can also observe with top that my RAM goes down to 0 before the crash. The amount of committed memory just before the crash:
$ grep Commit /proc/meminfo
CommitLimit: 18574304 kB
Committed_AS: 42114856 kB
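(If I understand the overcommit documentation correctly, CommitLimit is derived roughly as swap plus overcommit_ratio percent of RAM; the relevant kernel settings can be inspected with:)

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (default), 2 = strict accounting
cat /proc/sys/vm/overcommit_ratio    # percentage of RAM counted towards CommitLimit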
I would expect the HDD to be used for writing/buffering temporary data when RAM is not enough, but the available space on my HDD does not change during processing. So I began to dig for missing configuration (expecting issues due to my relocated data directory), following different sites:
https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
https://www.credativ.com/credativ-blog/2010/03/postgresql-and-linux-memory-management
My original postgresql.conf settings are the defaults, except for the changed data directory:
data_directory = '/hdd_data/postgresql/9.6/main'
shared_buffers = 128MB # min 128kB
#huge_pages = try # on, off, or try
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#replacement_sort_tuples = 150000 # limits use of replacement selection sort
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
I changed the config:
shared_buffers = 128MB
work_mem = 40MB # min 64kB
maintenance_work_mem = 64MB
I reloaded with sudo service postgresql reload and tested the same query, but found no change in behavior. Does this simply mean that such a large query cannot be done? Any help is appreciated.
I'm having similar trouble, but not with PostgreSQL (which is running happily): what is happening is simply that the kernel cannot allocate more RAM to the process, whichever process it is.
It would certainly help to add some swap to your configuration.
To check how much RAM and swap you have, run: free -h
On my machine, here is what it returns:
total used free shared buff/cache available
Mem: 7.7Gi 5.3Gi 928Mi 865Mi 1.5Gi 1.3Gi
Swap: 9.4Gi 7.1Gi 2.2Gi
You can clearly see that my machine is quite overloaded: about 8 GB of RAM and 9 GB of swap, of which 7 GB are used.
When the RAM-hungry process got killed after running out of memory, I saw both RAM and swap being used at 100%.
So allocating more swap may improve your situation.
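If you want to try that, a minimal sketch of adding a swap file on Ubuntu looks like this (adjust size and path to your setup):

sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it across reboots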
I run a website with 5 httpd servers (CentOS 7) in EC2; the instance type is m3.2xlarge.
The servers sit behind a load balancer.
The memory usage gradually keeps climbing on all the instances.
For example:
Memory usage a few seconds after restarting the httpd service:
[centos#ip-10-0-1-77 ~]$ while sleep 1; do free -m; done
total used free shared buff/cache available
Mem: 29741 2700 26732 36 307 26728
Swap: 0 0 0
total used free shared buff/cache available
Mem: 29741 2781 26651 36 307 26647
Swap: 0 0 0
total used free shared buff/cache available
Mem: 29741 2820 26613 36 307 26609
Swap: 0 0 0
[centos#ip-10-0-1-77 ~]$
.
.
.
This is what I see after an hour:
[centos#ip-10-0-1-77 ~]$ free -m
total used free shared buff/cache available
Mem: 29741 29092 363 41 284 346
Swap: 0 0 0
As shown above, it consumes all the memory (30 GB) within an hour.
To avoid this, I started using the worker MPM configuration.
The following is what I have added at the bottom of /etc/httpd/httpd.conf:
<IfModule mpm_worker_module>
MaxRequestWorkers 2500
MaxSpareThreads 250
MinSpareThreads 75
ServerLimit 100
StartServers 3
ThreadsPerChild 25
</IfModule>
Can someone help and suggest the right configuration to utilize the RAM properly on all the instances?
A standard Apache process takes up about 12 MB of RAM. If you have 30 GB available for Apache, you will never reach that with a ServerLimit of 100 (100 × 12 MB = 1200 MB = 1.2 GB). So I assume that Apache isn't what is taking up all that memory.
Is there an application or a database involved? Those can take up larger amounts of RAM.
For your servertuning.conf (or httpd.conf since you put it there):
<IfModule mpm_worker_module>
#maximum number of requests one worker handles before it is forced to close; 2.5k seems almost a little low
MaxRequestsPerChild 2500
#maximum number of worker threads which are kept spare
#250 seems quite high, but really depends on the traffic you are experiencing, we normally don't use more than 50-75
MaxSpareThreads 250
#minimum number of worker threads which are kept spare
#really high, only useful if you often experience higher bursts of traffic
#otherwise you should be fine with 15-30, maybe 50 if you experience higher fluctuation -> bigger bursts of requests
MinSpareThreads 75
#upper limit on the configurable number of threads per child process
#you have to increase this if you want more than 64 ThreadsPerChild
ThreadLimit 64
#maximum number of simultaneous client connections
#that is really low! --> increase that, definitely! We run on 1000, so about 12GB max
MaxClients 100
#initial number of server processes to start
#3 is really low if you expect your server to be flooded with requests the second you start it
#maybe turn it up a little to around 20 or even 50 if you receive lots of traffic right after a restart
StartServers 3
#number of worker threads created by each child process
#25 threads per worker is not too much, but at some point the administration of xx threads gets more expensive than creating new ones
#would suggest to leave it at 25 or turn it up to around 40
ThreadsPerChild 25
</IfModule>
Notice that I changed ServerLimit to MaxClients and MaxRequestWorkers to MaxRequestsPerChild, because as far as I know those are the terms used by the worker MPM.
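Since the directive names differ between MPMs and Apache versions, it can't hurt to validate the configuration before restarting, for example with:

apachectl configtest   # or: httpd -t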
Additionally, you can change the following variables:
#KeepAlive: Whether or not to allow persistent connections (more than
#one request per connection). Set to "Off" to deactivate.
#if it's on, leave it there
KeepAlive On
#MaxKeepAliveRequests: The maximum number of requests to allow
#during a persistent connection. Set to 0 to allow an unlimited amount.
#We recommend you leave this number high, for maximum performance.
#default=100, but you can turn that up if your sites contain a lot of items (img, css, ...)
#we are using about 20*<average object-count per site> = 600
MaxKeepAliveRequests 600
#KeepAliveTimeout: Number of seconds to wait for the next request from the
#same client on the same connection.
#would recommend to decrease that, otherwise you could become a victim of slow-dos attacks
#default is 15, we are running just fine on 5
KeepAliveTimeout 5
To further prevent slow-dos or piling up of open sessions, you can use mod_reqtimeout:
<IfModule mod_reqtimeout.c>
# allow 10s timeout for the headers and allow 1s more until 20s upon receipt of 1000 bytes.
# almost the same with the body, except that it is tricky to
# limit the request timeout within the body at all - it may take time to generate the body.
# below are the default values
#RequestReadTimeout header=10-20,MinRate=1000 body=20,MinRate=1000
# and this is how I'd set them with today's internet speed
# deduct the according numbers from explanation above...
RequestReadTimeout header=2-5,MinRate=100000 body=5-10,MinRate=1000000
</IfModule>
If that's not enough to help with your RAM issues (if they are really caused by Apache), use your server OS's tools to find out what is actually taking up all the RAM.
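For a quick first look, something like this on one of the instances will show what is holding the memory:

ps aux --sort=-%mem | head -n 15   # top processes by resident memory
top -o %MEM                        # interactive view, sorted by memory usage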
While inspecting a crash dump for an out-of-memory exception reported by a client, the output of !DumpHeap -stat showed that 575 MB of memory is taken up by 45,000 objects of type "Free", most of which I assume would have to reside in Gen 2 due to their size.
The first places I looked for problems were the large object heap (LOH) and pinned objects. The large object heap, with free space included, was only 70 MB, so that wasn't the issue, and running !gchandles showed:
GC Handle Statistics:
Strong Handles: 155
Pinned Handles: 265
Async Pinned Handles: 8
Ref Count Handles: 163
Weak Long Handles: 0
Weak Short Handles: 0
Other Handles: 0
which is a very small number of handles (around 600) compared to the number of free objects (45,000). To me this rules out the free blocks being caused by pinning.
I also looked into the free blocks themselves to see whether they had a consistent size, but on inspection the sizes varied widely, from just short of 5 MB down to only around 12 bytes or so.
Any help would be appreciated! I am at a loss, since there is fragmentation but no sign of it being caused by the two places I know to look: the large object heap (LOH) and pinned handles.
In which generation are the free objects?
"I assume would have to reside in Gen 2 due to the size"
Size is not related to generations. To find out in which generation the free blocks reside, you can follow these steps:
From !dumpheap -stat -type Free get the method table:
0:003> !dumpheap -stat -type Free
total 7 objects
Statistics:
MT Count TotalSize Class Name
00723538 7 100 Free
Total 7 objects
From !eeheap -gc, get the start addresses of the generations.
0:003> !eeheap -gc
Number of GC Heaps: 1
generation 0 starts at 0x026a1018
generation 1 starts at 0x026a100c
generation 2 starts at 0x026a1000
ephemeral segment allocation context: none
segment begin allocated size
026a0000 026a1000 02731ff4 0x00090ff4(593908)
Large object heap starts at 0x036a1000
segment begin allocated size
036a0000 036a1000 036a3250 0x00002250(8784)
Total Size 0x93244(602692)
------------------------------
GC Heap Size 0x93244(602692)
Then dump the free objects only of a specific generation by passing the start and end address (e.g. of generation 2 here):
0:003> !dumpheap -mt 00723538 0x026a1000 0x026a100c
Address MT Size
026a1000 00723538 12 Free
026a100c 00723538 12 Free
total 2 objects
Statistics:
MT Count TotalSize Class Name
00723538 2 24 Free
Total 2 objects
So in my simple case, there are 2 free objects in generation 2.
Pinned handles
600 pinned objects should not cause 45,000 free memory blocks. Still, from my experience, 600 pinned handles is a lot. But first, check in which generation the free memory blocks reside.
I have lots of memory allocated in my Mono for Android program that doesn't appear to belong directly to either the Dalvik or the Mono heap. Also, I can't figure out how to track .NET memory leaks.
When I call
adb shell dumpsys meminfo MyProgram.Droid
This is the output:
** MEMINFO in pid 1364 [MyProgram.Droid] **
Shared Private Heap Heap Heap
Pss Dirty Dirty Size Alloc Free
------ ------ ------ ------ ------ ------
Native 36 24 36 38080 37775 124
Dalvik 6934 15164 6572 16839 15384 1455
Cursor 0 0 0
Ashmem 0 0 0
Other dev 4 36 0
.so mmap 12029 2416 9068
.jar mmap 0 0 0
.apk mmap 16920 0 0
.ttf mmap 3 0 0
.dex mmap 2299 296 8
Other mmap 64 24 36
Unknown 28920 8216 28728
TOTAL 67209 26176 44448 54919 53159 1579
I assume that the "Unknown" section is the Mono framework, including the .NET heap. However, when I call
GC.GetTotalMemory(true)
it tells me that only 5 MB of memory is allocated. That leaves 23 MB which I cannot trace (and there is 38 MB of allocated native heap).
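For reference, the check I'm running is roughly the following (the log tag is just a placeholder):

// GetTotalMemory(true) forces a collection first, so this should reflect live managed objects only
long managedBytes = GC.GetTotalMemory(true);
Android.Util.Log.Info("MemProbe", "Managed heap: " + (managedBytes / (1024 * 1024)) + " MB");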
Additionally, I don't see that Xamarin has any tools for tracking .NET memory leaks. I've enabled garbage-collection logging with
adb shell setprop debug.mono.log gc,gref
but this is incredibly verbose, hard to read, and doesn't even include allocation sizes.
At this point I don't know how to track down the leaks. Since the allocations appear to be on the native heap, do I need to use the NDK to find out what's going on? Are there any tools I can use on the C# side to track the .NET leaks?
Thanks!