DB2 docker Shared memory segments cannot be allocated - linux

I am trying to setup a docker image with a DB2 database.
The installation completes without any problems, but I get the following error when I try to restart the database:
SQL1084C Shared memory segments cannot be allocated. SQLSTATE=57019
I based the Dockerfile on this one:
https://github.com/jeffbonhag/db2-docker
where he states the same problem should be addressed by adding the command
sysctl kernel.shmmax=18446744073692774399
to allow the kernel to allocate more memory but the error persists.
The docker daemon itself runs in Ubuntu 14.04 which runs inside Parallels on MacOSX.
EDIT: After some searching I found out that this is related to the following command:
UPDATE DB CFG FOR S0MXAT01 USING locklist 100000;

You are over-allocating the database memory, i.e. Docker is unable to satisfy the memory requirements. Have a look at the manuals for a breakdown of what is allocated within the database memory:
Bufferpools
The database heap
The locklist
The utility heap
The package cache
The catalog cache
The shared sort heap, if it is enabled
A 20% overflow area
You can fiddle around with (decrease) any of these heaps until Docker is happy.
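For example, a minimal sketch of reducing the LOCKLIST value from your EDIT (4096 is just an illustrative number of 4 KB pages; run this as the instance owner, then restart the instance):
db2 "UPDATE DB CFG FOR S0MXAT01 USING LOCKLIST 4096"
db2stop force
db2start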

In case others run into this: if you're rolling your own container and leave the memory settings at AUTOMATIC, Db2 may try to allocate all of the memory on the host, leading to this error. Sometimes the initial start works out fine, but you end up with odd crashes weeks or months down the line.
The "official" db2 container (the developer community edition one) handles this. If you're building your own container, you'll likely need to set DATABASE_MEMORY and/or INSTANCE_MEMORY to reasonable limits based on the size of your container and restart Db2 in the container. This can be done in your entrypoint script.

Related

Does docker manage its own filesystem like a standalone OS?

I have a program I'm running in a docker container. After 10-12 hours of running, the program terminated with filesystem-related errors (FileNotFoundError, or similar).
I'm wondering whether the disk filled up (or there was a similar filesystem-related issue), or whether there was a problem in my code (e.g. one process deleted the file prematurely).
I don't know much about how docker manages files, and I wonder whether it creates and manages its own filesystem inside the container or not. Here are the three possibilities I'm considering; I mainly wonder whether #1 could be the case:
1) If docker manages its own filesystem, could it be that although disk space is available on the host machine, the docker container ran out of its own storage space? (I've seen similar issues with running out of memory for a process whose memory was artificially limited using cgroups.)
2) Could it be that the host filesystem ran out of space and the files got corrupted or didn't get written correctly?
3) There is some bug in my code.
This is likely a bug in your code. Most programs print the error they encounter, and when a program encounters out-of-space, the error returned by the filesystem is: "No space left on device" (errno 28 ENOSPC).
If you see FileNotFoundError, that means the file is missing. My best theory is that it's coming from your consumer process.
It's still possible though, that the file doesn't exist because the producer ran out of space and you didn't handle the error correctly - you'll need to check your logs.
It might also be a race condition, depending on your application. There's really not enough details to answer that.
As to the title question:
By default, docker just overlay-mounts an (initially empty) writable directory from the host's filesystem on top of the image layers, so the amount of free space in the container is the same as the amount of free space on the host.
For the container's own filesystem (as opposed to volumes), the details depend on the storage driver you use. As @Dan Serbyn mentioned, the default limit for the devicemapper driver is 10 GB per container. The overlay2 driver, the default driver, doesn't have that limitation.
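If you're not sure which storage driver is in use, you can check it from the host; this prints the driver name (e.g. overlay2 or devicemapper), which tells you whether the 10 GB default applies at all:
docker info --format '{{.Driver}}'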
When the devicemapper storage driver is used, there is a default limit of 10 GB per container on Docker container storage.
You can check the disk space that containers are using by running the following command:
docker system df
It's also possible that the file your container is trying to access has access-level restrictions. Try making it available to the container's user, or, as a quick test, to everybody (chmod 777 file.txt).
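To narrow it down, you can check both things from the host while the container is running (mycontainer and the path are placeholders, and this assumes the image ships the usual df/ls utilities):
docker exec mycontainer df -h /
docker exec mycontainer ls -l /path/to/file.txt
The first shows whether the container's writable layer is out of space; the second shows the ownership and permissions your process actually sees.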

How can I change memory dedicated to Docker in Linux? [duplicate]

This question already has answers here:
How to increase/check default memory Docker has on Linux?
I'm trying to run the confluent cp-demo docker image.
https://docs.confluent.io/5.5.0/tutorials/cp-demo/docs/index.html
I'm using Ubuntu 20.04, and in order to start the container I need to increase Docker's max memory setting from the default 2 GB to 8 GB.
This can be done easily on Windows and Mac through the Docker Desktop app, but that isn't available on Ubuntu and I haven't found a way to modify it using the CLI. (I can only modify the memory of a container after I've started it with the CLI, but in order to start cp-demo it says I need to change the memory setting first.)
Does anyone know how can I do this?
As far as I know, on Mac and Windows you can control the memory and CPU given to the Docker application, but on Linux Docker uses the kernel's namespaces and cgroups directly, so there is no daemon-level setting to change: containers simply get whatever free resources are available on the host.
For your requirements:
Make sure the host machine has sufficient free resources available.
Start cp-demo with a memory limit: docker run -it --memory="2g" cp-demo
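If you want to double-check the limit a container actually got, docker inspect reports it in bytes (cp-demo-test and the ubuntu image here are just placeholders for a quick test):
docker run -d --name cp-demo-test --memory=2g ubuntu sleep infinity
docker inspect -f '{{.HostConfig.Memory}}' cp-demo-test
A value of 2147483648 confirms the 2 GB cap; 0 means no limit was set, i.e. the container can use everything the host has free.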

Docker warning on cgroup swap limit, memory.use_hierarchy

I am getting this warning from 'sudo docker -d':
WARNING: Your kernel does not support cgroup swap limit.
even after following the steps (as in this link):
modify below lines in /etc/default/grub (I did both for good measure)
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
and then update-grub/reboot via
sudo update-grub; sudo reboot
My questions are:
1) Should I be worried about this warning?
I think I should be because I am trying to use docker containers in a use case where enforcing memory limits is important.
2) Is it a good idea to change the memory use_hierarchy setting, or is there a better way to fix this?
I see this warning in 'dmesg'. I am not sure if it is a good idea to try to change the use_hierarchy setting to '1' (nor how exactly to do this)
cgroup: "memory" requires setting use_hierarchy to 1 on the root."
Or, is there some better way to fix this? I'm just firing wild shots here, perhaps a kernel upgrade would help? I see some 3.16 kernel upgrades are possible.
Environment:
I am running Ubuntu 14.04 x64 (kernel: 3.13.0-43-generic x86_64) with docker version 1.0.1
Other notes:
I have read other online help articles about similar docker/cgroup errors that say installing apparmor_parser fixes it. However, on my system, apparmor is installed and appears to be started up just fine (per dmesg). Also, this file exists: /sbin/apparmor_parser
Also, I'm rather new to admin tasks on linux servers.
The cgroup swap limit is important if you are using swap and want to enforce a memory limit that covers both memory and swap. My machines have no swap, so I never enabled it.
use_hierarchy is useful if you want reported memory usage to include the memory reported by all sub-cgroups. E.g. with use_hierarchy=1, /sys/fs/cgroup/memory/parent will report the memory used by processes under that cgroup and also by any sub-cgroups (like /sys/fs/cgroup/memory/parent/child). This is always a useful setting to enable, but it is not enabled by default on most OSes.
In summary, your docker containers will work fine without either of these settings. Enabling them gives you some extra benefit, especially if you care about limiting swap use and getting accurate memory reporting.
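For reference, after changing grub and rebooting you can verify both settings from the host (these are the cgroup v1 paths used on Ubuntu 14.04):
grep -o 'swapaccount=1' /proc/cmdline
ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
cat /sys/fs/cgroup/memory/memory.use_hierarchy
The memory.memsw.* files only appear when swap accounting is enabled, and a 1 from the last command means hierarchical accounting is on.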

How to perform memory dump to docker container from outside

I'm trying to find a way to perform a memory dump on a docker container in order to perform memory forensics (to detect malware exploits for example).
I would like to be able to perform the same methods I use on a virtual machine. The problem is that docker containers (and any kind of linux containers) use memory in a different way - containers share resources, use namespaces and cgroups...
I'd like to program a tool that performs this but am a bit lost as to where to begin.
How would one approach this problem?
Thanks in advance!
These days you can use the experimental Docker feature checkpoint and restore: https://github.com/boucher/docker/blob/cr-combined/experimental/checkpoint_restore.md.
There is a howto available at https://criu.org/Docker.
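A minimal sketch, assuming a daemon running with "experimental": true in /etc/docker/daemon.json and CRIU installed on the host (mycontainer, cp1 and the dump directory are placeholder names):
docker checkpoint create --checkpoint-dir /tmp/dumps --leave-running mycontainer cp1
The checkpoint directory then contains the CRIU image files (process memory pages, file descriptors, etc.) for the container's processes, which you can feed into your forensics tooling; --leave-running keeps the container up while the dump is taken.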

What is the benefit of Docker container for a memcached instance?

One of the Docker examples is for a container with Memcached configured. I'm wondering why one would want this versus a VM configured with Memcached. I'm guessing that it would make no sense to have more than one memcached docker container running under the same host, and that the only real advantage is the speed advantage of "spinning up" the memcached stack in a docker container vs Memcached via a VM. Is this correct?
Also, how does one set the memory to be used by memcached in the docker container? How would this work if there were two or more docker containers with Memcached under one host? (I'm assuming again that two or more would not make sense).
I'm wondering why one would want this versus a VM configured with Memcached?
Security: If someone breaks memcached and trojans the filesystem, it doesn't matter -- the filesystem gets thrown away when you start a new memcached.
Isolation: You can hard-limit each container to prevent it from using too much RAM.
Standardization: Currently, each app/database/cache/load balancer must record what to install, what to configure and what to run. There is no standard (though there is no shortage of tools such as Puppet, Chef, etc.). But these tools are very complex, not really OS independent (despite their claims), and they carry the same complexity from development to deployment.
With docker, everything is just a container started with run BLAH. If your app has 5 layers, you just have 5 containers to run, with a tiny bit of orchestration on top. Developers never need to "look into the container" unless they are developing at that layer.
Resources: You can spin up thousands of docker containers on an ordinary PC, but you would have trouble spinning up even hundreds of VMs. The limit is both CPU and RAM. Docker containers are just processes in an "enhanced" chroot; a VM runs dozens of background processes (cron, log rotation, syslog, etc.), while docker adds no extra processes at all.
I'm guessing that it would make no sense to have more than one memcached docker container running under the same host
It depends. There are cases where you want to split your RAM into separate parcels instead of one global pool (i.e. imagine you want to devote 20% of your cache to caching users, 40% of your cache to caching files, etc.).
Also, most sharding schemes are hard to expand, so people often start with many 'virtual' shards, then expand onto physical boxes when needed. So you might start with your app knowing about 20 memcached instances (chosen based on object ID). At first, all 20 run on one physical server. But later you split them onto 2 servers (10/10), then onto 5 servers (4/4/4/4/4), and finally onto 20 physical servers (1 memcached each). Thus, you can scale your app 20x just by moving instances around, without changing your app.
the only real advantage is the speed advantage of "spinning up" the memcached stack in a docker container vs Memcached via a VM. Is this correct?
No, that's just a slight side benefit. see above.
Also, how does one set the memory to be used by memcached in the docker container?
In the docker run command, just use -m.
How would this work if there were two or more docker containers with Memcached under one host? (I'm assuming again that two or more would not make sense).
Same way. If you didn't set a memory limit, it would be exactly like running 2 memcached processes on the host. (If one fills up the memory, both will get out of memory errors.)
There seem to be two questions here...
1 - The benefit is as you describe. You can sandbox the memcached instance (and its configuration) into separate containers, so you could run several on a given host. In addition, moving a memcached instance to another host is pretty trivial and, in the worst case, just requires an update to the application configuration.
2 - docker run -m <inbytes> <memcached-image> would limit the amount of memory a memcached container could consume. You can run as many of these as you want under a single host.
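For example, with the official memcached image (names and sizes are illustrative; Docker's -m caps the container, while memcached's own -m sets its cache size in MB, so keep the latter a bit below the former):
docker run -d --name cache-users -m 256m memcached -m 200
docker run -d --name cache-files -m 512m memcached -m 450
Both containers run side by side on one host, each with its own hard memory cap.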
I might be missing something here, but memcached only says something about memory usage, right? Docker containers are very efficient in disk space usage as well: you don't need a full OS for every instance, as you do with a VM, because containers share the host's resources. There is an insightful explanation with pictures on the docker.io website.
