memory behavior on Amazon Linux AMI release 2018.03 - memory-leaks
We observe increasing memory usage on our EC2 instances over time. After two weeks we have to reboot the systems. These machines run a few Docker containers. Let's have a look with 'free -m' after 14 days (I have stopped the Docker daemon at this point):
$free -m
total used free shared buffers cached
Mem: 7977 7852 124 0 4 573
-/+ buffers/cache: 7273 703
Swap: 0 0 0
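The second line is the interesting one: "used" minus buffers and cache is 7273 MB. As a sketch, that figure can be recomputed from /proc/meminfo (note that this older 'free' does not subtract reclaimable slab, so kernel slab memory still counts as "used"):
# Recompute the "-/+ buffers/cache" used value (~7273 MB) from /proc/meminfo.
awk '/MemTotal/{t=$2} /MemFree/{f=$2} /^Buffers/{b=$2} /^Cached/{c=$2}
     END {printf "used without buffers/cache: %d MiB\n", (t-f-b-c)/1024}' /proc/meminfo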
Now I run 'ps_mem':
Private + Shared = RAM used Program
124.0 KiB + 64.5 KiB = 188.5 KiB agetty
140.0 KiB + 60.5 KiB = 200.5 KiB acpid
180.0 KiB + 41.5 KiB = 221.5 KiB rngd
200.0 KiB + 205.5 KiB = 405.5 KiB lvmpolld
320.0 KiB + 89.5 KiB = 409.5 KiB irqbalance
320.0 KiB + 232.5 KiB = 552.5 KiB lvmetad
476.0 KiB + 99.5 KiB = 575.5 KiB auditd
624.0 KiB + 105.5 KiB = 729.5 KiB init
756.0 KiB + 72.5 KiB = 828.5 KiB crond
292.0 KiB + 622.5 KiB = 914.5 KiB udevd (3)
560.0 KiB + 377.0 KiB = 937.0 KiB mingetty (6)
1.0 MiB + 194.5 KiB = 1.2 MiB ntpd
1.1 MiB + 256.0 KiB = 1.4 MiB dhclient (2)
2.5 MiB + 103.5 KiB = 2.6 MiB rsyslogd
3.1 MiB + 259.0 KiB = 3.4 MiB sendmail.sendmail (2)
3.0 MiB + 609.0 KiB = 3.6 MiB sudo (2)
3.6 MiB + 1.6 MiB = 5.2 MiB bash (5)
2.9 MiB + 4.3 MiB = 7.2 MiB sshd (9)
14.5 MiB + 413.5 KiB = 14.9 MiB dmeventd
---------------------------------
45.4 MiB
=================================
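As a rough cross-check of the ps_mem total, summing the resident set size of every process gives the same order of magnitude (a sketch; RSS double-counts shared pages, so it slightly overstates the true user-space total):
# Sum RSS over all processes; this should land near the ps_mem total of ~45 MiB.
ps -eo rss= | awk '{s += $1} END {printf "total RSS: %.1f MiB\n", s/1024}'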
Now I try to allocate new memory with the 'stress' tool (http://people.seas.harvard.edu/~apw/stress/):
$ stress --vm 1 --vm-bytes 1G --timeout 10s --verbose
stress: info: [11120] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [11120] using backoff sleep of 3000us
stress: dbug: [11120] setting timeout to 10s
stress: dbug: [11120] --> hogvm worker 1 [11121] forked
stress: dbug: [11121] allocating 1073741824 bytes ...
stress: FAIL: [11121] (494) hogvm malloc failed: Cannot allocate memory
stress: FAIL: [11120] (394) <-- worker 11121 returned error 1
stress: WARN: [11120] (396) now reaping child worker processes
stress: FAIL: [11120] (451) failed run completed in 0s
==> 'stress' is not able to allocate 1 GB of new memory.
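For completeness, this is how to see at a glance how much memory the kernel still considers available and which overcommit policy is in effect (a sketch; both are standard procfs/sysctl knobs):
# Remaining headroom and overcommit policy.
grep -E '^(MemTotal|MemFree|MemAvailable)' /proc/meminfo
sysctl vm.overcommit_memory vm.overcommit_ratio   # 0 = heuristic overcommit (the default)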
But I do not understand where all the memory has gone.
Here is the output of 'top' (it shows a similar picture to ps_mem):
Tasks: 107 total, 1 running, 66 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8168828k total, 8045784k used, 123044k free, 5656k buffers
Swap: 0k total, 0k used, 0k free, 589372k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2030 root 20 0 102m 17m 5812 S 0.0 0.2 1:21.36 dmeventd
11145 root 20 0 82664 6604 5752 S 0.0 0.1 0:00.00 sshd
11130 root 20 0 183m 4472 3824 S 0.0 0.1 0:00.00 sudo
18339 ec2-user 20 0 114m 3896 1744 S 0.0 0.0 0:00.08 bash
2419 root 20 0 241m 3552 1188 S 0.0 0.0 2:07.85 rsyslogd
11146 sshd 20 0 80588 3440 2612 S 0.0 0.0 0:00.00 sshd
11131 root 20 0 112m 3288 2924 S 0.0 0.0 0:00.00 bash
17134 root 20 0 117m 3084 2008 S 0.0 0.0 0:00.00 sshd
17148 ec2-user 20 0 112m 2992 2620 S 0.0 0.0 0:00.01 bash
2605 root 20 0 85496 2776 1064 S 0.0 0.0 0:21.44 sendmail
2614 smmsp 20 0 81088 2704 1208 S 0.0 0.0 0:00.17 sendmail
15228 root 20 0 112m 2632 2228 S 0.0 0.0 0:00.02 bash
1 root 20 0 19684 2376 2068 S 0.0 0.0 0:01.91 init
2626 root 20 0 118m 2276 1644 S 0.0 0.0 0:02.45 crond
2233 root 20 0 9412 2244 1748 S 0.0 0.0 0:00.49 dhclient
11147 root 20 0 15364 2176 1856 R 0.0 0.0 0:00.00 top
2584 ntp 20 0 113m 2128 1308 S 0.0 0.0 0:49.60 ntpd
So where are these 7273 MB of memory actually consumed?
Output of 'cat /proc/meminfo':
MemTotal: 8168828 kB
MemFree: 129736 kB
MemAvailable: 567464 kB
Buffers: 5116 kB
Cached: 585504 kB
SwapCached: 0 kB
Active: 476920 kB
Inactive: 130228 kB
Active(anon): 22340 kB
Inactive(anon): 80 kB
Active(file): 454580 kB
Inactive(file): 130148 kB
Unevictable: 17620 kB
Mlocked: 17620 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 34088 kB
Mapped: 14668 kB
Shmem: 80 kB
Slab: 5625876 kB
SReclaimable: 142240 kB
SUnreclaim: 5483636 kB
KernelStack: 2016 kB
PageTables: 4384 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4084412 kB
Committed_AS: 109856 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7286784 kB
DirectMap2M: 1101824 kB
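The striking lines are Slab and especially SUnreclaim: roughly 5.4 GB of kernel slab memory, most of it unreclaimable, which covers most of the memory that is not attributable to user-space processes. A small sketch to pull just the kernel-side counters out of /proc/meminfo:
# Show the kernel-side counters that explain the "missing" memory, in MiB.
awk '/^(Slab|SReclaimable|SUnreclaim|KernelStack|PageTables):/ {printf "%-14s %8.1f MiB\n", $1, $2/1024}' /proc/meminfo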
Output of 'slabtop':
Active / Total Objects (% used) : 8445426 / 11391340 (74.1%)
Active / Total Slabs (% used) : 533926 / 533926 (100.0%)
Active / Total Caches (% used) : 78 / 101 (77.2%)
Active / Total Size (% used) : 5033325.10K / 5414048.91K (93.0%)
Minimum / Average / Maximum Object : 0.01K / 0.47K / 9.44K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3216990 525372 16% 0.09K 76595 42 306380K kmalloc-96
3101208 3101011 99% 1.00K 219166 32 7013312K kmalloc-1024
2066976 2066841 99% 0.32K 86124 24 688992K taskstats
1040384 1039935 99% 0.03K 8128 128 32512K kmalloc-32
1038080 1037209 99% 0.06K 16220 64 64880K kmalloc-64
516719 516719 100% 2.09K 113785 15 3641120K request_queue
223356 22610 10% 0.57K 7977 28 127632K radix_tree_node
52740 39903 75% 0.13K 1758 30 7032K kernfs_node_cache
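slabtop points at a handful of caches (kmalloc-1024, taskstats, request_queue) as by far the largest consumers. The same ranking can be produced non-interactively from /proc/slabinfo (a sketch; reading slabinfo requires root):
# Rank slab caches by approximate footprint (num_objs * objsize), in MiB.
sudo awk 'NR > 2 {printf "%-28s %10.1f MiB\n", $1, $3 * $4 / 1048576}' /proc/slabinfo | sort -k2 -nr | head -n 15
# Alternatively: sudo slabtop -o -s c | head -n 25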
Now I rebooted the machine and ran 'perf kmem record --caller'. After a few seconds I had to cancel it because the output file perf.data was already over 1 GB in size. Then I ran 'perf kmem stat --caller'; here is the output:
---------------------------------------------------------------------------------------------------------
Callsite | Total_alloc/Per | Total_req/Per | Hit | Ping-pong | Frag
---------------------------------------------------------------------------------------------------------
dm_open+2b | 240/8 | 120/4 | 30 | 0 | 50,000%
match_number+2a | 120/8 | 60/4 | 15 | 0 | 50,000%
rebuild_sched_domains_locked+dd | 72/8 | 36/4 | 9 | 0 | 50,000%
dm_btree_del+2b | 40960/4096 | 20720/2072 | 10 | 0 | 49,414%
sk_prot_alloc+7c | 86016/2048 | 44016/1048 | 42 | 0 | 48,828%
hugetlb_cgroup_css_alloc+29 | 2560/512 | 1320/264 | 5 | 0 | 48,438%
blk_throtl_init+2a | 15360/1024 | 8040/536 | 15 | 0 | 47,656%
bpf_int_jit_compile+6e | 40960/8192 | 21440/4288 | 5 | 0 | 47,656%
mem_cgroup_css_alloc+2f | 10240/2048 | 5360/1072 | 5 | 2 | 47,656%
alloc_disk_node+32 | 30720/2048 | 16560/1104 | 15 | 0 | 46,094%
mem_cgroup_css_alloc+166 | 5120/1024 | 2800/560 | 5 | 2 | 45,312%
blkcg_css_alloc+3b | 2560/512 | 1400/280 | 5 | 0 | 45,312%
kobject_uevent_env+be | 1224704/4096 | 698464/2336 | 299 | 0 | 42,969%
uevent_show+81 | 675840/4096 | 385440/2336 | 165 | 0 | 42,969%
blkg_alloc+3c | 40960/1024 | 23680/592 | 40 | 0 | 42,188%
dm_table_create+34 | 7680/512 | 4560/304 | 15 | 0 | 40,625%
journal_init_common+34 | 30720/2048 | 18360/1224 | 15 | 0 | 40,234%
throtl_pd_alloc+2b | 56320/1024 | 34320/624 | 55 | 0 | 39,062%
strndup_user+3f | 14496/17 | 8917/10 | 829 | 0 | 38,486%
alloc_trial_cpuset+19 | 14336/1024 | 8848/632 | 14 | 0 | 38,281%
cpuset_css_alloc+29 | 5120/1024 | 3160/632 | 5 | 0 | 38,281%
proc_reg_open+33 | 48768/64 | 30480/40 | 762 | 0 | 37,500%
get_mountpoint+73 | 26432/64 | 16520/40 | 413 | 0 | 37,500%
alloc_pipe_info+aa | 219136/1024 | 136960/640 | 214 | 12 | 37,500%
alloc_fair_sched_group+f0 | 38400/512 | 24000/320 | 75 | 0 | 37,500%
__alloc_workqueue_key+77 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
newary+69 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
disk_expand_part_tbl+74 | 960/64 | 600/40 | 15 | 0 | 37,500%
alloc_dax+29 | 120/8 | 75/5 | 15 | 0 | 37,500%
kernfs_mount_ns+3c | 320/64 | 200/40 | 5 | 0 | 37,500%
bucket_table_alloc+be | 16640/978 | 10496/617 | 17 | 12 | 36,923%
__alloc_workqueue_key+250 | 7680/512 | 4920/328 | 15 | 0 | 35,938%
journal_init_common+1b9 | 61440/4096 | 40920/2728 | 15 | 0 | 33,398%
kernfs_fop_write+b3 | 2248/11 | 1507/7 | 191 | 0 | 32,963%
__alloc_skb+72 | 3698176/876 | 2578048/611 | 4217 | 115 | 30,289%
alloc_pid+33 | 80896/128 | 56944/90 | 632 | 43 | 29,608%
alloc_pipe_info+3d | 41088/192 | 29104/136 | 214 | 12 | 29,167%
device_create_groups_vargs+59 | 15360/1024 | 10920/728 | 15 | 0 | 28,906%
sget_userns+ee | 112640/2048 | 80960/1472 | 55 | 8 | 28,125%
key_alloc+13e | 480/96 | 350/70 | 5 | 0 | 27,083%
load_elf_phdrs+49 | 153600/602 | 113176/443 | 255 | 0 | 26,318%
alloc_vfsmnt+aa | 11752/22 | 8765/17 | 513 | 130 | 25,417%
__memcg_init_list_lru_node+6b | 35200/32 | 26400/24 | 1100 | 160 | 25,000%
proc_self_get_link+96 | 12352/16 | 9264/12 | 772 | 0 | 25,000%
memcg_kmem_get_cache+9e | 46336/64 | 34752/48 | 724 | 0 | 25,000%
kernfs_fop_open+286 | 45056/64 | 33792/48 | 704 | 0 | 25,000%
insert_shadow+27 | 16544/32 | 12408/24 | 517 | 3 | 25,000%
allocate_cgrp_cset_links+70 | 28800/64 | 21600/48 | 450 | 0 | 25,000%
ext4_ext_remove_space+8db | 12352/64 | 9264/48 | 193 | 0 | 25,000%
dev_exception_add+25 | 5760/64 | 4320/48 | 90 | 0 | 25,000%
mempool_create_node+4e | 8160/96 | 6120/72 | 85 | 0 | 25,000%
alloc_rt_sched_group+11d | 7200/96 | 5400/72 | 75 | 0 | 25,000%
copy_semundo+60 | 2400/32 | 1800/24 | 75 | 7 | 25,000%
ext4_readdir+825 | 3264/64 | 2448/48 | 51 | 0 | 25,000%
alloc_worker+1d | 8640/192 | 6480/144 | 45 | 0 | 25,000%
alloc_workqueue_attrs+27 | 1440/32 | 1080/24 | 45 | 0 | 25,000%
ext4_fill_super+57 | 30720/2048 | 23040/1536 | 15 | 0 | 25,000%
apply_wqattrs_prepare+32 | 960/64 | 720/48 | 15 | 0 | 25,000%
inotify_handle_event+68 | 960/64 | 720/48 | 15 | 1 | 25,000%
blk_alloc_queue_stats+1b | 480/32 | 360/24 | 15 | 0 | 25,000%
proc_self_get_link+57 | 160/16 | 120/12 | 10 | 0 | 25,000%
disk_seqf_start+25 | 256/32 | 192/24 | 8 | 0 | 25,000%
memcg_write_event_control+8a | 960/192 | 720/144 | 5 | 0 | 25,000%
eventfd_file_create.part.3+28 | 320/64 | 240/48 | 5 | 0 | 25,000%
do_seccomp+249 | 160/32 | 120/24 | 5 | 0 | 25,000%
mem_cgroup_oom_register_event+29 | 160/32 | 120/24 | 5 | 0 | 25,000%
bucket_table_alloc+32 | 512/512 | 384/384 | 1 | 0 | 25,000%
__kernfs_new_node+25 | 42424/33 | 32046/24 | 1284 | 2 | 24,463%
single_open_size+2f | 45056/4096 | 35024/3184 | 11 | 0 | 22,266%
alloc_fdtable+ae | 544/90 | 424/70 | 6 | 0 | 22,059%
__register_sysctl_paths+10f | 2304/256 | 1800/200 | 9 | 0 | 21,875%
pskb_expand_head+71 | 10240/2048 | 8000/1600 | 5 | 0 | 21,875%
cpuacct_css_alloc+28 | 1280/256 | 1000/200 | 5 | 0 | 21,875%
shmem_symlink+a5 | 1440/13 | 1135/10 | 105 | 1 | 21,181%
kernfs_fop_open+d5 | 135168/192 | 107008/152 | 704 | 0 | 20,833%
mb_cache_create+2c | 2880/192 | 2280/152 | 15 | 0 | 20,833%
crypto_create_tfm+32 | 1440/96 | 1140/76 | 15 | 0 | 20,833%
bpf_prog_alloc+9d | 960/192 | 760/152 | 5 | 0 | 20,833%
pidlist_array_load+172 | 768/192 | 608/152 | 4 | 0 | 20,833%
cgroup_mkdir+ca | 46080/1024 | 36540/812 | 45 | 2 | 20,703%
__proc_create+a1 | 17280/192 | 13740/152 | 90 | 0 | 20,486%
__nf_conntrack_alloc+4e | 20800/320 | 16640/256 | 65 | 2 | 20,000%
devcgroup_css_alloc+1b | 1280/256 | 1040/208 | 5 | 0 | 18,750%
ext4_htree_store_dirent+35 | 27584/77 | 22770/64 | 354 | 0 | 17,452%
copy_ipcs+63 | 5120/1024 | 4240/848 | 5 | 4 | 17,188%
__list_lru_init+225 | 10560/96 | 8800/80 | 110 | 16 | 16,667%
device_private_init+1f | 5760/192 | 4800/160 | 30 | 0 | 16,667%
alloc_rt_sched_group+ef | 153600/2048 | 129600/1728 | 75 | 0 | 15,625%
ext4_fill_super+2907 | 1920/128 | 1620/108 | 15 | 0 | 15,625%
__d_alloc+169 | 107648/115 | 91360/97 | 934 | 0 | 15,131%
copy_utsname+85 | 2560/512 | 2200/440 | 5 | 4 | 14,062%
kobject_set_name_vargs+1e | 11904/66 | 10261/57 | 179 | 33 | 13,802%
kasprintf+3a | 11744/91 | 10196/79 | 129 | 33 | 13,181%
prepare_creds+21 | 191808/192 | 167832/168 | 999 | 31 | 12,500%
__seq_open_private+1c | 16896/64 | 14784/56 | 264 | 0 | 12,500%
start_this_handle+2da | 29440/256 | 25760/224 | 115 | 90 | 12,500%
load_elf_binary+1e8 | 3520/32 | 3080/28 | 110 | 0 | 12,500%
alloc_fair_sched_group+11d | 38400/512 | 33600/448 | 75 | 0 | 12,500%
__kthread_create_on_node+5e | 3840/64 | 3360/56 | 60 | 0 | 12,500%
wb_congested_get_create+86 | 2560/64 | 2240/56 | 40 | 0 | 12,500%
bioset_create+2e | 3840/128 | 3360/112 | 30 | 0 | 12,500%
kobj_map+83 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+54 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+2c | 480/32 | 420/28 | 15 | 0 | 12,500%
alloc_fdtable+4b | 384/64 | 336/56 | 6 | 0 | 12,500%
unix_bind+1a2 | 640/128 | 560/112 | 5 | 0 | 12,500%
kobject_get_path+56 | 30016/100 | 26522/88 | 299 | 0 | 11,640%
__register_sysctl_table+51 | 20448/151 | 18288/135 | 135 | 0 | 10,563%
create_cache+3e | 45696/384 | 40936/344 | 119 | 33 | 10,417%
kvmalloc_node+3e | 2113536/25161 | 1897896/22594 | 84 | 0 | 10,203%
dev_create+ab | 1440/96 | 1300/86 | 15 | 0 | 9,722%
__anon_vma_prepare+d2 | 290576/88 | 264160/80 | 3302 | 81 | 9,091%
anon_vma_fork+5e | 166672/88 | 151520/80 | 1894 | 187 | 9,091%
kthread+3f | 5760/96 | 5280/88 | 60 | 0 | 8,333%
thin_ctr+6f | 2880/192 | 2640/176 | 15 | 0 | 8,333%
sock_alloc_inode+18 | 492096/704 | 452952/648 | 699 | 11 | 7,955%
jbd2_journal_add_journal_head+67 | 48480/120 | 45248/112 | 404 | 1 | 6,667%
do_execveat_common.isra.31+c0 | 37120/256 | 34800/240 | 145 | 6 | 6,250%
kernfs_iattrs.isra.4+59 | 10112/128 | 9480/120 | 79 | 0 | 6,250%
shmem_fill_super+25 | 3200/128 | 3000/120 | 25 | 0 | 6,250%
bdi_alloc_node+2a | 15360/1024 | 14400/960 | 15 | 0 | 6,250%
alloc_mnt_ns+54 | 1152/128 | 1080/120 | 9 | 4 | 6,250%
alloc_fair_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_fair_sched_group+4e | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+4d | 640/128 | 600/120 | 5 | 0 | 6,250%
__register_sysctl_table+434 | 43776/270 | 41157/254 | 162 | 0 | 5,983%
__kernfs_new_node+42 | 839528/136 | 790144/128 | 6173 | 1025 | 5,882%
mqueue_alloc_inode+16 | 4800/960 | 4520/904 | 5 | 4 | 5,833%
bpf_prepare_filter+24b | 40960/8192 | 38920/7784 | 5 | 0 | 4,980%
bpf_convert_filter+57 | 20480/4096 | 19460/3892 | 5 | 0 | 4,980%
bpf_prepare_filter+111 | 10240/2048 | 9730/1946 | 5 | 0 | 4,980%
ep_alloc+3d | 24576/192 | 23552/184 | 128 | 11 | 4,167%
inet_twsk_alloc+3a | 1736/248 | 1680/240 | 7 | 0 | 3,226%
mm_alloc+16 | 296960/2048 | 290000/2000 | 145 | 10 | 2,344%
copy_process.part.40+9e6 | 202752/2048 | 198000/2000 | 99 | 11 | 2,344%
mempool_create_node+f3 | 235920/425 | 230520/415 | 555 | 0 | 2,289%
dax_alloc_inode+16 | 11520/768 | 11280/752 | 15 | 0 | 2,083%
bdev_alloc_inode+16 | 12480/832 | 12240/816 | 15 | 0 | 1,923%
ext4_find_extent+290 | 423552/99 | 415440/97 | 4243 | 0 | 1,915%
radix_tree_node_alloc.constprop.19+78 | 5331336/584 | 5258304/576 | 9129 | 8 | 1,370%
alloc_inode+66 | 528352/608 | 521400/600 | 869 | 20 | 1,316%
proc_alloc_inode+16 | 928200/680 | 917280/672 | 1365 | 45 | 1,176%
copy_process.part.40+95d | 483648/2112 | 478152/2088 | 229 | 8 | 1,136%
shmem_alloc_inode+16 | 326096/712 | 322432/704 | 458 | 6 | 1,124%
copy_process.part.40+10fe | 234496/1024 | 232664/1016 | 229 | 12 | 0,781%
ext4_alloc_inode+17 | 647360/1088 | 642600/1080 | 595 | 1 | 0,735%
__vmalloc_node_range+d3 | 13192/30 | 13112/30 | 426 | 36 | 0,606%
sk_prot_alloc+2f | 544768/1127 | 542080/1122 | 483 | 6 | 0,493%
...
SUMMARY (SLAB allocator)
========================
Total bytes requested: 818.739.691
Total bytes allocated: 821.951.696
Total bytes freed: 763.705.848
Net total bytes allocated: 58.245.848
Total bytes wasted on internal fragmentation: 3.212.005
Internal fragmentation: 0,390778%
Cross CPU allocations: 28.844/10.157.339
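To keep perf.data at a manageable size, the recording can be bounded by tracing a short-lived command instead of cancelling by hand (a sketch; this assumes the perf build shipped with this kernel supports the usual 'perf kmem' sub-commands):
# Trace kmem events only while 'sleep 30' runs, then summarise per call site.
sudo perf kmem record sleep 30
sudo perf kmem stat --caller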