Pop OS / Dell XPS 9310 -- battery drained overnight on suspend - linux

My laptop is suspending on lid close successfully, but if I don't have it plugged in overnight, the battery is drained by the morning.
I'm including logs from a short suspend I ran just now. I can suspend it overnight and look at the logs afterward, but is there anything immediately suspicious here? I validated that all suspend-related targets are loaded via:
sudo systemctl status sleep.target suspend.target hibernate.target hybrid-sleep.target
Apr 11 22:09:29 pop-os systemd[1]: Reached target Sleep.
Apr 11 22:09:29 pop-os systemd[1]: Starting Suspend...
Apr 11 22:09:29 pop-os kernel: [ 44.986190] PM: suspend entry (s2idle)
Apr 11 22:09:29 pop-os systemd-sleep[3730]: Suspending system...
Apr 11 22:09:29 pop-os kernel: [ 44.991600] Filesystems sync: 0.005 seconds
Apr 11 22:09:57 pop-os kernel: [ 44.994638] Freezing user space processes ... (elapsed 0.002 seconds) done.
Apr 11 22:09:57 pop-os kernel: [ 44.996920] OOM killer disabled.
Apr 11 22:09:57 pop-os kernel: [ 44.996921] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
Apr 11 22:09:57 pop-os kernel: [ 44.998055] printk: Suspending console(s) (use no_console_suspend to debug)
Apr 11 22:09:57 pop-os kernel: [ 45.315954] psmouse serio1: Failed to disable mouse on isa0060/serio1
Apr 11 22:09:57 pop-os kernel: [ 46.377203] ACPI: EC: interrupt blocked
Apr 11 22:09:57 pop-os kernel: [ 72.605807] ACPI: EC: interrupt unblocked
Apr 11 22:09:57 pop-os kernel: [ 73.107660] pcieport 10000:e0:06.0: can't derive routing for PCI INT A
Apr 11 22:09:57 pop-os kernel: [ 73.107666] nvme 10000:e1:00.0: PCI INT A: no GSI
Apr 11 22:09:57 pop-os kernel: [ 73.114494] nvme nvme0: 8/0/0 default/read/poll queues
Apr 11 22:09:57 pop-os kernel: [ 73.363725] OOM killer enabled.
Apr 11 22:09:57 pop-os kernel: [ 73.363728] Restarting tasks ...
Apr 11 22:09:57 pop-os kernel: [ 73.364154] mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_component_ops [i915])
Apr 11 22:09:57 pop-os kernel: [ 73.367166] done.
Apr 11 22:09:57 pop-os touchegg[1000]: libinput error: event0 - Lid Switch: client bug: event processing lagging behind by 1279ms, your system is too slow
Apr 11 22:09:57 pop-os /usr/libexec/gdm-x-session[1823]: (II) modeset(0): EDID vendor "SHP", prod id 5370
Apr 11 22:09:57 pop-os /usr/libexec/gdm-x-session[1823]: (II) modeset(0): Printing DDC gathered Modelines:
Apr 11 22:09:57 pop-os /usr/libexec/gdm-x-session[1823]: (II) modeset(0): Modeline "3840x2400"x0.0 592.50 3840 3888 3920 4000 2400 2403 2409 2469 -hsync -vsync (148.1 kHz eP)
Apr 11 22:09:57 pop-os /usr/libexec/gdm-x-session[1823]: (II) modeset(0): Modeline "3840x2400"x0.0 474.00 3840 3888 3920 4000 2400 2403 2409 2469 -hsync -vsync (118.5 kHz e)
Apr 11 22:09:57 pop-os systemd-sleep[3730]: System resumed.
Apr 11 22:09:57 pop-os bluetoothd[961]: Controller resume with wake event 0x0
Apr 11 22:09:57 pop-os kernel: [ 73.413202] PM: suspend exit
Apr 11 22:09:57 pop-os systemd[1]: systemd-suspend.service: Succeeded.
Apr 11 22:09:57 pop-os systemd[1]: Finished Suspend.
Apr 11 22:09:57 pop-os systemd[1]: Stopped target Sleep.
Apr 11 22:09:57 pop-os systemd[1]: Reached target Suspend.
Apr 11 22:09:57 pop-os systemd[1]: Stopped target Suspend.
Apr 11 22:09:57 pop-os NetworkManager[968]: <info> [1649729397.3461] manager: sleep: wake requested (sleeping: yes enabled: yes)
Apr 11 22:09:57 pop-os NetworkManager[968]: <info> [1649729397.3461] device (wlp113s0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Apr 11 22:09:57 pop-os ModemManager[1079]: <info> [sleep-monitor] system is resuming
Apr 11 22:09:57 pop-os NetworkManager[968]: <info> [1649729397.4258] manager: NetworkManager state is now DISCONNECTED

The hardware on this system only supports s2idle sleep; it does not expose the lower-power deep sleep state (details on the different sleep states here: https://www.kernel.org/doc/Documentation/power/states.txt).
pop-os:$~ sudo cat /sys/power/mem_sleep
[s2idle]
I found this thread: https://www.dell.com/community/XPS/XPS-13-9310-Ubuntu-deep-sleep-missing/td-p/7734008 It suggests switching the storage controller mode from RAID (Dell's default) to AHCI in the Dell BIOS.
So far this has worked as a solution! I've lost only 10% battery overnight, and can go about 3 days idling in suspend without a charge.
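If you want to confirm that deep sleep became available after the BIOS change, the same sysfs file shows it; the bracketed entry is the state currently in use. A small sketch (the expected output and the kernelstub step are assumptions based on a Pop!_OS-style systemd-boot setup):
$ cat /sys/power/mem_sleep
s2idle [deep]
# select deep for the current boot only
$ echo deep | sudo tee /sys/power/mem_sleep
# or make it the default via a kernel parameter (Pop!_OS uses kernelstub;
# GRUB-based distros would instead edit GRUB_CMDLINE_LINUX_DEFAULT and run update-grub)
$ sudo kernelstub -a "mem_sleep_default=deep"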
(Before this, I tried enabling hibernation using these instructions from System76: https://support.system76.com/articles/enable-hibernation/. That did not work well, because the Killer wifi driver does not load on wake from hibernate.)

With hybrid suspend, the machine's state is written to swap space and then suspend-to-RAM (sleep) is invoked, so power usage stays minimal.
The reason for doing this: waking from hibernate is slower than waking from sleep. To make sure the system state is not lost, the state is written to swap and then the machine sleeps, which uses very little power and does not shut the machine off. The state also remains in RAM, so as long as the battery does not die, wake-up happens from RAM, which is faster.
Read more: https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate
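As a side note, systemd can invoke hybrid sleep directly if you want to try it; this is a sketch and assumes your swap space is already set up for hibernation:
$ sudo systemctl hybrid-sleep
It can also be made the lid-close action with HandleLidSwitch=hybrid-sleep, using the same logind.conf editing steps shown below.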
If you want to make sure the battery does not drain at all, switch your lid-close action from suspend to hibernate; hibernation consumes no power. Follow the steps below.
$ grep HandleLidSwitch /etc/systemd/logind.conf
HandleLidSwitch=suspend
If the line is commented out, uncomment it by removing the "#" and change the value to hibernate.
HandleLidSwitch=hibernate
If you are new to Linux, you can use gedit to edit the file:
sudo gedit /etc/systemd/logind.conf
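After saving the file, logind has to re-read its configuration before the new lid-close action applies; restarting the service (or rebooting) does that. Keep in mind that hibernation also needs swap large enough to hold RAM and a resume= kernel parameter, as covered by the System76 instructions linked above. A minimal sketch:
$ sudo systemctl restart systemd-logind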

Related

ElasticSearch docker container remains in Exited status

I recently installed Docker and Elasticsearch 7.17.6. docker-compose up -d worked fine,
but when trying to bring up the Elasticsearch container, its status remains Exited (1) and the container can't be started.
Command to start: sudo docker container start <container-ID>
The exception below comes from: sudo docker logs <Container-ID>
Exception in thread "main" java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config/jvm.options
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218)
at java.base/java.nio.file.Files.newByteChannel(Files.java:380)
at java.base/java.nio.file.Files.newByteChannel(Files.java:432)
at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:422)
at java.base/java.nio.file.Files.newInputStream(Files.java:160)
at org.elasticsearch.tools.launchers.JvmOptionsParser.readJvmOptionsFiles(JvmOptionsParser.java:168)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:124)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:86)
The /var/log/messages file shows the error below:
msg="error detaching from network es_elastic: could not find network attachment for container <Container-ID> to network es_elastic"
Nov 1 13:11:07 ES11 dockerd: time="2022-11-01T13:11:07.567246932-04:00" level=info msg="initialized VXLAN UDP port to 4789 "
Nov 1 13:11:07 ES11 kernel: br0: port 2(<Device_name2>) entered disabled state
Nov 1 13:11:07 ES11 kernel: br0: port 1(<Device_name1>) entered disabled state
Nov 1 13:11:07 ES11 kernel: ov-1-f: renamed from br0
Nov 1 13:11:07 ES11 kernel: device <Device_name2> left promiscuous mode
Nov 1 13:11:07 ES11 kernel: ov-1-f: port 2(<Device_name2>) entered disabled state
Nov 1 13:11:07 ES11 kernel: device <Device_name1> left promiscuous mode
Nov 1 13:11:07 ES11 kernel: ov-1-f: port 1(<Device_name1>) entered disabled state
Nov 1 13:11:07 ES11 kernel: vx-1-f: renamed from <Device_name1>
Nov 1 13:11:07 ES11 kernel: : renamed from <Device_name2>
Nov 1 13:11:07 ES11 avahi-daemon[891]: Withdrawing workstation service for vx-1-f.
Nov 1 13:11:07 ES11 NetworkManager[999]: <info> [ID.7289] manager: (): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Nov 1 13:11:07 ES11 kernel: : renamed from eth0
Nov 1 13:11:07 ES11 NetworkManager[999]: <info> [ID.7693] manager: (): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Nov 1 13:11:07 ES11 dockerd: time="2022-11-01T13:11:07.ID-04:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object
The jvm.options file was missing from the config directory at /u01/es11/config, but I'm unsure why the error shows a different location, or why it worked on the other nodes es12 and es13. The container started fine after placing the files there.
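One likely explanation for the path mismatch (an assumption, since the compose file isn't shown): the host directory /u01/es11/config is bind-mounted over the container's default config directory, so the launcher reports the in-container path /usr/share/elasticsearch/config/jvm.options even though the missing file actually lives on the host. A sketch of what such a mount looks like with plain docker run, plus a pre-start check on the host side:
# hypothetical equivalent of the compose service's volume mapping
$ docker run -d --name es11 \
    -v /u01/es11/config:/usr/share/elasticsearch/config \
    docker.elastic.co/elasticsearch/elasticsearch:7.17.6
# the host directory should contain at least the standard config files
$ ls /u01/es11/config
elasticsearch.yml  jvm.options  log4j2.properties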

Bluetooth on raspberry 4 without Linux

I'm working on a non-Linux OS and am now trying to enable Bluetooth on a Raspberry Pi 4.
I already have some of the necessary drivers: GPIO, UART (PL011 and mini-UART), the mailbox, and expgpio through that mailbox.
To enable Bluetooth I take these steps:
I configure the GPIOs as described in Linux's dts so that UART0 is connected to the BT/WiFi chip;
I set the BT_ON expgpio to 1 through the mailbox (it is set by default, I just make sure);
I write some commands to UART0 and nothing happens =( The UART driver returns success, and reading the command response always times out.
I think I may have missed some initialization step, but as far as I can see in the Linux log the only extra step is the firmware download, and many commands, such as reading the device name, can be executed before it.
Maybe I forgot to enable some clock source or a regulator, but I have no idea where to start my research.
Here is part of the Raspbian kernel log with additional debug info:
Jan 28 05:17:13 raspberrypi kernel: [ 15.321055] Bluetooth: Core ver 2.22
Jan 28 05:17:13 raspberrypi kernel: [ 15.321093] device class 'bluetooth': registering
Jan 28 05:17:13 raspberrypi kernel: [ 15.321149] NET: Registered PF_BLUETOOTH protocol family
Jan 28 05:17:13 raspberrypi kernel: [ 15.321158] Bluetooth: HCI device and connection manager initialized
Jan 28 05:17:13 raspberrypi kernel: [ 15.321176] Bluetooth: HCI socket layer initialized
Jan 28 05:17:13 raspberrypi kernel: [ 15.321189] Bluetooth: L2CAP socket layer initialized
Jan 28 05:17:13 raspberrypi kernel: [ 15.321208] Bluetooth: SCO socket layer initialized
Jan 28 05:17:13 raspberrypi kernel: [ 15.335356] Bluetooth: HCI UART driver ver 2.3
Jan 28 05:17:13 raspberrypi kernel: [ 15.335377] Bluetooth: HCI UART protocol H4 registered at id 0
Jan 28 05:17:13 raspberrypi kernel: [ 15.335387] bus: 'serial': add driver hci_uart_h5
Jan 28 05:17:13 raspberrypi kernel: [ 15.335456] Bluetooth: HCI UART protocol Three-wire (H5) registered at id 2
Jan 28 05:17:13 raspberrypi kernel: [ 15.335480] bus: 'platform': add driver hci_bcm
Jan 28 05:17:13 raspberrypi kernel: [ 15.335641] bus: 'serial': add driver hci_uart_bcm
Jan 28 05:17:13 raspberrypi kernel: [ 15.335679] Bluetooth: HCI UART protocol Broadcom registered at id 7
Jan 28 05:17:13 raspberrypi kernel: [ 15.337922] Bluetooth: TTY name ttyAMA0
Jan 28 05:17:13 raspberrypi kernel: [ 15.338543] Bluetooth: hci_uart_register_dev
Jan 28 05:17:13 raspberrypi kernel: [ 15.338599] device: 'hci0': device_add
Jan 28 05:17:13 raspberrypi kernel: [ 15.345358] device: 'rfkill1': device_add
Jan 28 05:17:13 raspberrypi kernel: [ 15.345497] Bluetooth: HCI UART protocol set. Proto H4; id 0
Jan 28 05:17:13 raspberrypi kernel: [ 15.345530] Bluetooth: hci_uart_open hci0 5d898f04
Jan 28 05:17:13 raspberrypi kernel: [ 15.345543] Bluetooth: hci_uart_setup: START
Jan 28 05:17:13 raspberrypi kernel: [ 15.345550] Bluetooth: hci_uart_setup: init speed = 0
Jan 28 05:17:13 raspberrypi kernel: [ 15.345557] Bluetooth: hci_uart_setup: oper speed = 0
Jan 28 05:17:13 raspberrypi kernel: [ 15.352975] Bluetooth: hci0: type 1 len 3
Jan 28 05:17:13 raspberrypi kernel: [ 15.353010] Bluetooth skb: 00000000: 01 03 10 00
Jan 28 05:17:13 raspberrypi kernel: [ 15.353026] Bluetooth: hci_uart_write_work written 4
Jan 28 05:17:13 raspberrypi kernel: [ 15.353760] Bluetooth: hci0: type 1 len 3
Jan 28 05:17:13 raspberrypi kernel: [ 15.353826] Bluetooth skb: 00000000: 01 01 10 00
....
a lot of lines
....
Jan 28 05:17:13 raspberrypi btuart[479]: bcm43xx_init
Jan 28 05:17:13 raspberrypi btuart[479]: Flash firmware /lib/firmware/brcm/BCM4345C0.hcd
Jan 28 05:17:13 raspberrypi btuart[479]: Set Controller UART speed to 3000000 bit/s
Jan 28 05:17:13 raspberrypi btuart[479]: Device setup complete
Jan 28 05:17:13 raspberrypi systemd[1]: Starting Load/Save RF Kill Switch Status...
Jan 28 05:17:13 raspberrypi systemd[1]: Started Configure Bluetooth Modems connected by UART.
Jan 28 05:17:13 raspberrypi systemd[1]: Reached target Multi-User System.
Jan 28 05:17:13 raspberrypi systemd[1]: Reached target Graphical Interface.
Jan 28 05:17:13 raspberrypi systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jan 28 05:17:13 raspberrypi systemd[625]: Reached target Bluetooth.
Jan 28 05:17:13 raspberrypi systemd[1]: Started Load/Save RF Kill Switch Status.
Jan 28 05:17:13 raspberrypi systemd[1]: Created slice system-bthelper.slice.
Jan 28 05:17:13 raspberrypi systemd[1]: Starting Raspberry Pi bluetooth helper...
Jan 28 05:17:13 raspberrypi systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jan 28 05:17:13 raspberrypi systemd[1]: Finished Update UTMP about System Runlevel Changes.
Jan 28 05:17:13 raspberrypi bthelper[774]: Raspberry Pi BDADDR already set
Jan 28 05:17:13 raspberrypi systemd[1]: Finished Raspberry Pi bluetooth helper.
Jan 28 05:17:13 raspberrypi kernel: [ 15.490868] Bluetooth: hci0: type 1 len 8
Jan 28 05:17:13 raspberrypi kernel: [ 15.490909] Bluetooth skb: 00000000: 01 1c fc 05 01 02 00 01 01
Jan 28 05:17:13 raspberrypi kernel: [ 15.490930] Bluetooth: hci_uart_write_work written 9
Thank you in advance
For the H4 protocol, the UART must be used with hardware flow control. Adding hardware flow control (RTS/CTS, i.e. the CTSEn/RTSEn bits in the PL011 control register) to the PL011 UART driver resolved the problem.

How to know Memory cgroup limit?

We have a Kubernetes cluster and we are running Jenkins in it. Our Jenkins restarts every 48 hours; when we check the kubelet logs for the worker where Jenkins is deployed, they show this error:
Feb 15 14:52:01 myworker kernel: Memory cgroup out of memory: Kill process 110129 (Computer.thread) score 1972 or sacrifice child
Feb 15 14:52:01 myworker kernel: Killed process 50179 (java), UID 1000, total-vm:17378260kB, anon-rss:8371056kB, file-rss:29676kB, shmem-rss:0kB
where 50179 is the Java process for Jenkins.
We set the Kubernetes limit for Jenkins to 8Gi:
resources:
  limits:
    cpu: 3500m
    memory: 8Gi
  requests:
    cpu: "1"
    memory: 4Gi
I also checked the New Relic alerts, which we have integrated with our pods; memory never goes beyond 5 GB.
Detailed logs below.
Feb 15 14:52:01 myworker kernel: Download metada cpuset=kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d mems_allowed=0
Feb 15 14:52:01 myworker kernel: CPU: 6 PID: 115222 Comm: Download metada Kdump: loaded Tainted: G ------------ T 3.10.0-1160.15.2.el7.x86_64 #1
Feb 15 14:52:01 myworker kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
Feb 15 14:52:01 myworker kernel: Call Trace:
Feb 15 14:52:01 myworker kernel: [<ffffffff82581fba>] dump_stack+0x19/0x1b
Feb 15 14:52:01 myworker kernel: [<ffffffff8257c8da>] dump_header+0x90/0x229
Feb 15 14:52:01 myworker kernel: [<ffffffff8209d378>] ? ep_poll_callback+0xf8/0x220
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc1d16>] ? find_lock_task_mm+0x56/0xc0
Feb 15 14:52:01 myworker kernel: [<ffffffff8203caa8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc227d>] oom_kill_process+0x2cd/0x490
Feb 15 14:52:01 myworker kernel: [<ffffffff82040ebc>] mem_cgroup_oom_synchronize+0x55c/0x590
Feb 15 14:52:01 myworker kernel: [<ffffffff82040320>] ? mem_cgroup_charge_common+0xc0/0xc0
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc2b64>] pagefault_out_of_memory+0x14/0x90
Feb 15 14:52:01 myworker kernel: [<ffffffff8257ade6>] mm_fault_error+0x6a/0x157
Feb 15 14:52:01 myworker kernel: [<ffffffff8258f8d1>] __do_page_fault+0x491/0x500
Feb 15 14:52:01 myworker kernel: [<ffffffff8258f975>] do_page_fault+0x35/0x90
Feb 15 14:52:01 myworker kernel: [<ffffffff8258b778>] page_fault+0x28/0x30
Feb 15 14:52:01 myworker kernel: Task in /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d killed as a result of limit of /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d
Feb 15 14:52:01 myworker kernel: memory: usage 8388608kB, limit 8388608kB, failcnt 111634
Feb 15 14:52:01 myworker kernel: memory+swap: usage 8388608kB, limit 9007199254740988kB, failcnt 0
Feb 15 14:52:01 myworker kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Feb 15 14:52:01 myworker kernel: Memory cgroup stats for /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d: cache:20KB rss:8388588KB rss_huge:6144KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:8388556KB inactive_file:4KB active_file:0KB unevictable:0KB
Feb 15 14:52:01 myworker kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[root@myworker log]# head messages -n376428 | tail -n 40
Feb 15 14:52:01 myworker kernel: [115493] 1000 115493 2059 462 8 0 969 git
Feb 15 14:52:01 myworker kernel: [115497] 1000 115497 1764 350 8 0 969 git
Feb 15 14:52:01 myworker kernel: [115498] 1000 115498 24351 2784 17 0 969 git-remote-http
Feb 15 14:52:01 myworker kernel: Memory cgroup out of memory: Kill process 115496 (git fetch --tag) score 1972 or sacrifice child
Feb 15 14:52:01 myworker kernel: Killed process 115493 (git), UID 1000, total-vm:8236kB, anon-rss:296kB, file-rss:1552kB, shmem-rss:0kB
Feb 15 14:52:01 myworker containerd: time="2022-02-15T14:52:01.791126760Z" level=info msg="TaskOOM event &TaskOOM{ContainerID:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d,XXX_unrecognized:[],}"
Feb 15 14:52:01 myworker kernel: Download metada invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=969
Feb 15 14:52:01 myworker kernel: Download metada cpuset=kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d mems_allowed=0
Feb 15 14:52:01 myworker kernel: CPU: 6 PID: 115222 Comm: Download metada Kdump: loaded Tainted: G ------------ T 3.10.0-1160.15.2.el7.x86_64 #1
Feb 15 14:52:01 myworker kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
Feb 15 14:52:01 myworker kernel: Call Trace:
Feb 15 14:52:01 myworker kernel: [<ffffffff82581fba>] dump_stack+0x19/0x1b
Feb 15 14:52:01 myworker kernel: [<ffffffff8257c8da>] dump_header+0x90/0x229
Feb 15 14:52:01 myworker kernel: [<ffffffff8209d378>] ? ep_poll_callback+0xf8/0x220
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc1d16>] ? find_lock_task_mm+0x56/0xc0
Feb 15 14:52:01 myworker kernel: [<ffffffff8203caa8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc227d>] oom_kill_process+0x2cd/0x490
Feb 15 14:52:01 myworker kernel: [<ffffffff82040ebc>] mem_cgroup_oom_synchronize+0x55c/0x590
Feb 15 14:52:01 myworker kernel: [<ffffffff82040320>] ? mem_cgroup_charge_common+0xc0/0xc0
Feb 15 14:52:01 myworker kernel: [<ffffffff81fc2b64>] pagefault_out_of_memory+0x14/0x90
Feb 15 14:52:01 myworker kernel: [<ffffffff8257ade6>] mm_fault_error+0x6a/0x157
Feb 15 14:52:01 myworker kernel: [<ffffffff8258f8d1>] __do_page_fault+0x491/0x500
Feb 15 14:52:01 myworker kernel: [<ffffffff8258f975>] do_page_fault+0x35/0x90
Feb 15 14:52:01 myworker kernel: [<ffffffff8258b778>] page_fault+0x28/0x30
Feb 15 14:52:01 myworker kernel: Task in /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d killed as a result of limit of /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d
Feb 15 14:52:01 myworker kernel: memory: usage 8388608kB, limit 8388608kB, failcnt 111634
Feb 15 14:52:01 myworker kernel: memory+swap: usage 8388608kB, limit 9007199254740988kB, failcnt 0
Feb 15 14:52:01 myworker kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Feb 15 14:52:01 myworker kernel: Memory cgroup stats for /system.slice/containerd.service/kubepods-burstable-pod1840326e_dca6_4e8c_a55a_f4fb9a7c95fa.slice:cri-containerd:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d: cache:20KB rss:8388588KB rss_huge:6144KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:8388556KB inactive_file:4KB active_file:0KB unevictable:0KB
Feb 15 14:52:01 myworker kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Feb 15 14:52:01 myworker kernel: [41710] 1000 41710 285 1 4 0 969 tini
Feb 15 14:52:01 myworker kernel: [50179] 1000 50179 4344565 2100159 4662 0 969 java
Feb 15 14:52:01 myworker kernel: [115497] 1000 115497 1764 350 8 0 969 git
Feb 15 14:52:01 myworker kernel: [115498] 1000 115498 24351 2784 17 0 969 git-remote-http
Feb 15 14:52:01 myworker kernel: Memory cgroup out of memory: Kill process 110129 (Computer.thread) score 1972 or sacrifice child
Feb 15 14:52:01 myworker kernel: Killed process 50179 (java), UID 1000, total-vm:17378260kB, anon-rss:8371056kB, file-rss:29676kB, shmem-rss:0kB
Feb 15 14:52:03 myworker containerd: time="2022-02-15T14:52:03.132654815Z" level=info msg="Finish piping stderr of container \"7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d\""
Feb 15 14:52:03 myworker containerd: time="2022-02-15T14:52:03.132676088Z" level=info msg="Finish piping stdout of container \"7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d\""
Feb 15 14:52:03 myworker containerd: time="2022-02-15T14:52:03.134738144Z" level=info msg="TaskExit event &TaskExit{ContainerID:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d,ID:7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d,Pid:41710,ExitStatus:137,ExitedAt:2022-02-15 14:52:03.134458495 +0000 UTC,XXX_unrecognized:[],}"
Feb 15 14:52:03 myworker containerd: time="2022-02-15T14:52:03.248040140Z" level=info msg="shim disconnected" id=7fc11e70ccd4fd078b8d243f2710ecc1404955bf52a5cb05eb54f2917086420d
The only problem I can see here is that we are telling Kubernetes the pod can go up to 8Gi, but the memory cgroup might have a limit below 8Gi, so when it tries to go beyond about 5 GB the pod gets killed and restarts.
What is the best way to find the memory cgroup limit? And is there a way to know which pods/processes are using this cgroup?
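For reference, one way to read the limit directly on the worker node; this is a sketch assuming cgroup v1 (which matches the 3.10 kernel paths in the log), with the cgroup path taken from the OOM message above:
# the cgroup memory limit for the container, in bytes
$ cat /sys/fs/cgroup/memory/system.slice/containerd.service/<cgroup-path-from-the-log>/memory.limit_in_bytes
8589934592
# 8589934592 bytes = 8388608 kB = 8Gi, i.e. exactly the limit from the pod spec
# PIDs currently charged to this cgroup
$ cat /sys/fs/cgroup/memory/system.slice/containerd.service/<cgroup-path-from-the-log>/cgroup.procs
# the same limit as seen from inside the pod
$ kubectl exec -n jenkins jenkins-jenkins-instance -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes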
Questions:
Q1: What kind of cluster do you use? Minikube, kubeadm or managed by cloud GKE, EKS, AKS?
A1: kubeadm
Q2: Which version of kubernetes do you use?
A2: v1.21.3
Q3: When did the problem with the Jenkins pod restarting start?
A3: The issue might have been there from the beginning, but we started noticing it recently when we moved more jobs to the Kubernetes-based Jenkins.
Q4: Could you paste the output from the Jenkins pod using kubectl describe pod?
A4:
# kubectl describe pod -n jenkins jenkins-jenkins-instance
Name: jenkins-jenkins-instance
Namespace: jenkins
Priority: 0
Node: myworker/192.168.X.X
Start Time: Sun, 13 Mar 2022 15:12:19 +0000
Labels: app=jenkins-operator
jenkins-cr=jenkins-instance
Annotations: <none>
Status: Running
IP: 192.168.113.152
IPs:
IP: 192.168.113.152
Controlled By: Jenkins/jenkins-instance
Containers:
jenkins-master:
Container ID: containerd://70e68b7b069404f825b53e9d8f0dac22c595074e5bdc4659cae5248e25af8e00
Image: jenkins/jenkins:lts
Image ID: docker.io/jenkins/jenkins@sha256:b414f82151b865d3efd49ec27a944f624188d09fec58700cddfbe6bae2450f77
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Command:
bash
-c
/var/jenkins/scripts/init.sh && exec /sbin/tini -s -- /usr/local/bin/jenkins.sh
State: Running
Started: Sun, 13 Mar 2022 15:12:20 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 3500m
memory: 8Gi
Requests:
cpu: 1
memory: 4Gi
Liveness: http-get http://:http/login delay=100s timeout=5s period=10s #success=1 #failure=12
Readiness: http-get http://:http/login delay=80s timeout=1s period=10s #success=1 #failure=10
Environment:
COPY_REFERENCE_FILE_LOG: /var/lib/jenkins/copy_reference_file.log
NEW_RELIC_METADATA_KUBERNETES_CLUSTER_NAME: IAD.Prod
NEW_RELIC_METADATA_KUBERNETES_NODE_NAME: (v1:spec.nodeName)
NEW_RELIC_METADATA_KUBERNETES_NAMESPACE_NAME: jenkins (v1:metadata.namespace)
NEW_RELIC_METADATA_KUBERNETES_POD_NAME: jenkins-jenkins-instance (v1:metadata.name)
NEW_RELIC_METADATA_KUBERNETES_CONTAINER_NAME: master
NEW_RELIC_METADATA_KUBERNETES_CONTAINER_IMAGE_NAME: jenkins/jenkins:lts
JAVA_OPTS: -XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0 -Djenkins.install.runSetupWizard=false -Djava.awt.headless=true
JENKINS_HOME: /var/lib/jenkins
Mounts:
/var/jenkins/init-configuration from init-configuration (ro)
/var/jenkins/operator-credentials from operator-credentials (ro)
/var/jenkins/scripts from scripts (ro)
/var/lib/jenkins from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fc57k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins-operator-scripts-jenkins-instance
Optional: false
init-configuration:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins-operator-init-configuration-jenkins-instance
Optional: false
operator-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-operator-credentials-jenkins-instance
Optional: false
kube-api-access-fc57k:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Q5: How do you deploy Jenkins?
A5: We are using the Jenkins Operator to deploy Jenkins.

kfd kfd: STONEY not supported in kfd

I've been getting this error on my Manjaro Linux machine; here is some more info:
-- Journal begins at Mon 2021-03-08 18:37:49 EET, ends at Tue 2021-03-09 16:21:19 EET. --
Mar 09 11:02:26 manjaro kernel: tpm_crb MSFT0101:00: can't request region for resource [mem 0xcfbb6000-0xcfbb9fff]
Mar 09 11:02:29 manjaro kernel: kfd kfd: STONEY not supported in kfd
Mar 09 11:02:32 manjaro systemd-backlight[1332]: Failed to get backlight or LED device 'backlight:acpi_video0': No such device
Mar 09 11:02:32 manjaro systemd[1]: Failed to start Load/Save Screen Backlight Brightness of backlight:acpi_video0.
Subject: A start job for unit systemd-backlight@backlight:acpi_video0.service has failed
Defined-By: systemd
Support: https://forum.manjaro.org/c/support
A start job for unit systemd-backlight@backlight:acpi_video0.service has finished with a failure.
The job identifier is 1354 and the job result is failed.
Mar 09 11:02:32 manjaro systemd-backlight[1333]: Failed to get backlight or LED device 'backlight:acpi_video1': No such device
Mar 09 11:02:32 manjaro systemd[1]: Failed to start Load/Save Screen Backlight Brightness of backlight:acpi_video1.
Subject: A start job for unit systemd-backlight@backlight:acpi_video1.service has failed
Defined-By: systemd
Support: https://forum.manjaro.org/c/support
A start job for unit systemd-backlight@backlight:acpi_video1.service has finished with a failure.
The job identifier is 1360 and the job result is failed.
I don't know if the kfd error is happening because of the first error.
I would like to know what it actually means, where it is coming from, and how I can fix it.
And maybe a word on the systemd-backlight@backlight:acpi_video1.service error.
The setup I have:
Cpu:
AMD A9-9420 RADEON R5, 5 COMPUTE CORES 2C+3G, 2586 MHz
GPU:
ATI Stoney [Radeon R2/R3/R4/R5 Graphics]
4GB RAM, 250GB SSD
OS: Linux manjaro 5.9.16-1-MANJARO #1 SMP PREEMPT Mon Dec 21 22:00:46 UTC 2020 x86_64 GNU/Linux

Kafka broker crash every day - OOM killer

I have a cluster of 3 Kafka brokers, version 0.10.2.1. Each broker has its own host with 2 CPUs / 16 GB RAM. In addition, we are using Docker to wrap the broker process.
The problems is as follows:
Almost every day, at the same time, we see all of our Kafka clients fail for 10 minutes.
At first I thought it was related to Kafka "No broker in ISR for partition",
but after a while I discovered that the broker simply crashes due to the OOM killer.
I also played with Xmx and Xms before I discovered that it was the OOM killer. I tried:
-Xmx2048M -Xms2048M
-Xmx4096M -Xms2048M
Same behavior for both.
In addition, we currently don't have any ulimit set:
>> ulimit
unlimited
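Note that a bare ulimit only prints the file-size limit (ulimit -f), so "unlimited" here does not mean every limit is unlimited. A quick sketch for checking the full set, both for the shell and for the running broker process (the PID placeholder is hypothetical):
$ ulimit -a
$ cat /proc/<kafka-broker-pid>/limits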
less kern.log
LOGS:
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761019] run-parts invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761022] run-parts cpuset=/ mems_allowed=0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761026] CPU: 1 PID: 12266 Comm: run-parts Not tainted 4.4.0-59-generic #80-Ubuntu
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761027] Hardware name: Xen HVM domU, BIOS 4.2.amazon 02/16/2017
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761029] 0000000000000286 000000004811d7da ffff880036967af0 ffffffff813f7583
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761031] ffff880036967cc8 ffff880439f2f000 ffff880036967b60 ffffffff8120ad5e
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761033] ffffffff81cd2dc7 0000000000000000 ffffffff81e67760 0000000000000206
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761036] Call Trace:
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761041] [<ffffffff813f7583>] dump_stack+0x63/0x90
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761044] [<ffffffff8120ad5e>] dump_header+0x5a/0x1c5
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761048] [<ffffffff81192722>] oom_kill_process+0x202/0x3c0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761049] [<ffffffff81192b49>] out_of_memory+0x219/0x460
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761052] [<ffffffff81198abd>] __alloc_pages_slowpath.constprop.88+0x8fd/0xa70
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761054] [<ffffffff81198eb6>] __alloc_pages_nodemask+0x286/0x2a0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761056] [<ffffffff81198f6b>] alloc_kmem_pages_node+0x4b/0xc0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761060] [<ffffffff8107ea5e>] copy_process+0x1be/0x1b70
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761063] [<ffffffff81391bcc>] ? apparmor_file_alloc_security+0x5c/0x220
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761066] [<ffffffff811ed05a>] ? kmem_cache_alloc+0x1ca/0x1f0
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761070] [<ffffffff81347bd3>] ? security_file_alloc+0x33/0x50
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761073] [<ffffffff810caf11>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761074] [<ffffffff810805a0>] _do_fork+0x80/0x360
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761076] [<ffffffff81080929>] SyS_clone+0x19/0x20
Jan 23 06:25:16 kafka10-172-40-103-177 kernel: [16504862.761080] [<ffffffff818384f2>] entry_SYSCALL_64_fastpath+0x16/0x71
And ....
Jan 24 06:25:25 kafka10-172-40-103-177 kernel: [16591270.954463] Out of memory: Kill process 16123 (java) score 134 or sacrifice child
Jan 24 06:25:25 kafka10-172-40-103-177 kernel: [16591270.958609] Killed process 16123 (java) total-vm:11977548kB, anon-rss:2035780kB, file-rss:67848kB
Any suggestions on how to approach this?
We found the problem.
First I will say that adding more RAM to the machine also solved the problem, but it is an "expensive" solution.
The problem was as follows:
Since I was working with the EC2 Ubuntu distribution, the daily cron jobs ran on every broker in the cluster at exactly the same time. One of those scripts was mlocate, and it apparently took too many resources.
I assume that since the whole Kafka cluster had I/O and memory pressure at the same time, the brokers tried to use more memory and the OOM killer killed them.
When 2 of my 3 brokers were down, some services were down as well.
So the solution was:
Change the crontab so the daily jobs run at different hours of the day on each broker.
Disable mlocate.
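A sketch of both fixes on a stock Ubuntu host; the 06:25 run-parts entry in the log above matches Ubuntu's default /etc/crontab schedule for cron.daily, so that is the hour to stagger:
# /etc/crontab on one broker, daily jobs moved from 06:25 to e.g. 08:25
# (pick a different hour on each broker so the jobs never run at the same time)
25 8    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
# disable the daily mlocate database update entirely
$ sudo chmod -x /etc/cron.daily/mlocate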
I also faced the same issue; the resources mentioned below helped me out:
https://docs.confluent.io/current/kafka/deployment.html
How to decide Kafka cluster size:
https://community.hortonworks.com/articles/80813/kafka-best-practices-1.html
And please make sure that swap is enabled on all the brokers.
