I am getting this warning from 'sudo docker -d':
WARNING: Your kernel does not support cgroup swap limit.
even after following the steps (as in this link):
modify the lines below in /etc/default/grub (I modified both for good measure)
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
and then update-grub/reboot via
sudo update-grub; sudo reboot
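(As a sanity check, I assume I can verify after the reboot that the flags actually reached the kernel with something like the following, given that the memory cgroup is mounted at /sys/fs/cgroup/memory as is usual on 14.04:)
grep -E 'cgroup_enable=memory|swapaccount=1' /proc/cmdline
ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
The memory.memsw.* files should only show up once swap accounting is actually active.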
My questions are:
1) Should I be worried about this warning?
I think I should be because I am trying to use docker containers in a use case where enforcing memory limits is important.
2) Is it a good idea to change the memory use_hierarchy setting? -- or -- What is the best way to fix this?
I see this warning in 'dmesg', and I am not sure whether it is a good idea to try to change the use_hierarchy setting to '1' (nor how exactly to do this):
cgroup: "memory" requires setting use_hierarchy to 1 on the root.
Or is there some better way to fix this? I'm just taking shots in the dark here; perhaps a kernel upgrade would help? I see that some 3.16 kernel upgrades are possible.
Environment:
I am running Ubuntu 14.04 x64 (kernel: 3.13.0-43-generic x86_64) with docker version 1.0.1
Other notes:
I have read other online help articles about similar docker/cgroup errors that say installing apparmor_parser fixes it. However, on my system, apparmor is installed and appears to be started up just fine (per dmesg). Also, this file exists: /sbin/apparmor_parser
Also, I'm rather new to admin tasks on linux servers.
The cgroup swap limit is important if you are using swap and want to enforce a memory limit that covers both memory and swap. My machines have no swap, so I never enabled it.
use_hierarchy is useful if you want reported memory usage to include the memory used by all sub-cgroups. For example, with use_hierarchy=1, /sys/fs/cgroup/memory/parent will report the memory used by processes in that cgroup and in any of its sub-cgroups (such as /sys/fs/cgroup/memory/parent/child). It is a useful setting to enable, but it is not enabled by default on most distributions.
In summary, your docker containers will work fine without either of these settings. Enabling them gives you some extra benefit, especially if you care about limiting swap use and getting accurate memory reporting.
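If you do enable swap accounting, one rough way to confirm Docker is actually applying the limits is to start a throwaway memory-limited container and read its cgroup files from the host (a sketch only: it assumes the memory cgroup is at /sys/fs/cgroup/memory, that containers end up under a docker/ subgroup with the default execution driver, and 'memtest' is just an illustrative name):
docker run -d -m 128m --name memtest ubuntu:14.04 sleep 300
cat /sys/fs/cgroup/memory/docker/*/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/docker/*/memory.memsw.limit_in_bytes
docker stop memtest && docker rm memtest
The first file should show roughly 128 MB; the memsw file only exists (and is only enforced) when swap accounting is on.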
Related
I have a program I'm running in a docker container. After 10-12 hours of running, the program terminated with filesystem-related errors (FileNotFoundError or similar).
I'm wondering whether the disk filled up (or there was a similar filesystem-related issue) or whether there was a problem in my code (e.g. one process deleted the file prematurely).
I don't know much about how Docker manages files, and I wonder whether a container creates and manages its own filesystem or not. Here are the three possibilities I'm considering; I mainly wonder whether #1 could be the case:
1) If Docker manages its own filesystem, could it be that, although disk space is available on the host machine, the container ran out of its own storage space? (I've seen similar issues with processes running out of memory when a memory limit is artificially imposed using cgroups.)
2) Could it be that the host filesystem ran out of space and the files got corrupted or weren't written correctly?
3) There is some bug in my code.
This is likely a bug in your code. Most programs print the error they encounter, and when a program encounters out-of-space, the error returned by the filesystem is: "No space left on device" (errno 28 ENOSPC).
If you see FileNotFoundError, that means the file is missing. My best theory is that it's coming from your consumer process.
It's still possible though, that the file doesn't exist because the producer ran out of space and you didn't handle the error correctly - you'll need to check your logs.
It might also be a race condition, depending on your application. There's really not enough details to answer that.
As to the title question:
By default, Docker just mounts the image layers plus an (initially empty) writable directory from the host's filesystem into the container, so the amount of free space in the container is the same as the amount of free space on the host.
For the container's own writable layer, it depends on the storage driver you use. As Dan Serbyn mentioned, the default limit for the devicemapper driver is 10 GB; the overlay2 driver - the current default - doesn't have that limitation.
With the devicemapper storage driver, there is a default limit of 10 GB on Docker container storage.
You can check the disk space that containers are using by running the following command:
docker system df
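To separate the host-space theory from the container-space theory, it can also help to compare the two views and check which storage driver is in use (my-container is a placeholder for your container's name):
df -h /var/lib/docker
docker exec my-container df -h /
docker info | grep -i 'storage driver'
If the host has plenty of space but the container's root filesystem is full, a per-container size limit (e.g. devicemapper's 10 GB default) is the likely culprit.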
It's also possible that the file your container is trying to access has restrictive permissions. Try making it readable by the container's user, or even by everybody (chmod 777 file.txt).
I am trying to setup a docker image with a DB2 database.
The installation is completed without any problems, but I get the following error when I try to restart the database:
SQL1084C Shared memory segments cannot be allocated. SQLSTATE=57019
I based the Dockerfile on this one:
https://github.com/jeffbonhag/db2-docker
where he states that the same problem should be addressed by adding the command
sysctl kernel.shmmax=18446744073692774399
to allow larger shared memory segments to be allocated, but the error persists.
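For reference, applying and persisting that setting looks roughly like this (it still did not make the error go away for me):
sudo sysctl -w kernel.shmmax=18446744073692774399
echo 'kernel.shmmax=18446744073692774399' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p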
The docker daemon itself runs on Ubuntu 14.04, which runs inside Parallels on Mac OS X.
EDIT: After some searching I found out that this is related to the following command:
UPDATE DB CFG FOR S0MXAT01 USING locklist 100000;
You are over-allocating the database memory heap, i.e. Docker is unable to satisfy the memory requirements. Have a look at the following link to the manuals; it gives a breakdown of what is located in database memory:
Bufferpools
The database heap
The locklist
The utility heap
The package cache
The catalog cache
The shared sort heap, if it is enabled
A 20% overflow area
You can fiddle around with (i.e. decrease) any of these heaps until Docker is happy.
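For example, since the EDIT above points at LOCKLIST (which is measured in 4 KB pages, so 100000 pages is roughly 400 MB), shrinking it as the instance owner would look something like this -- a sketch only, 10000 is an illustrative value:
db2 update db cfg for S0MXAT01 using LOCKLIST 10000
db2stop force
db2start
The same pattern applies to the other areas in the list above.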
In case others run into this: if you're rolling your own container and leave memory set to AUTOMATIC, Db2 may try to allocate all of the host's memory, leading to this error. Sometimes the initial start works out OK, but you end up with odd crashes weeks or months down the line.
The "official" db2 container (the developer community edition one) handles this. If you're building your own container, you'll likely need to set DATABASE_MEMORY and/or INSTANCE_MEMORY to reasonable limits based on the size of your container and restart Db2 in the container. This can be done in your entrypoint script.
I'm having trouble configuring Shorewall on my Linode instance.
I just thought maybe you know of an issue, perhaps related to your Xen virtualization and running Shorewall on it...
When attempting to start Shorewall I get the following error:
"ERROR: UNTRACKED state requires Raw Table in your kernel and iptables"
Any ideas would be appreciated.
Thanks
Ideally the kernel should have CONFIG_IP_NF_RAW (and CONFIG_IP6_NF_RAW for IPv6) enabled, which provides support for the missing "Raw Table" mentioned in the error.
A link to an (unmaintained) page for kernel configuration options with Shorewall can be found here:
http://shorewall.net/kernel.htm
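A quick way to check whether the running kernel already has raw table support (assuming the distribution ships a config file under /boot; Linode-supplied kernels may not):
grep CONFIG_IP_NF_RAW /boot/config-$(uname -r)
sudo iptables -t raw -L -n
If the second command fails with something like "can't initialize iptables table 'raw'", the kernel really is missing the table.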
However, if you are unable to update the kernel, you may be able to work around the issue by editing the shorewall.conf (or shorewall6.conf) file, and changing the following line:
BLACKLIST="NEW,INVALID,UNTRACKED"
to:
BLACKLIST="NEW,INVALID"
This would, obviously, reduce some of the effectiveness of the firewall, hence ideally the kernel should be updated instead.
I installed Jenkins on my vserver. When I had a look at htop, Jenkins was running with 30 threads, each allowed to allocate 247 MB of memory and up to 1181 MB of virtual memory.
Because I only have a small vserver, I tried to change the number of threads, but I could not find any configuration file.
I installed jenkins via aptitude install jenkins and in htop I can see that Jenkins is running from: /usr/bin/java -jar /usr/share/jenkins/jenkins.war
Neither Tomcat nor Jetty is installed.
Where is the information about the number of threads saved? Or how can I reduce the number of threads for Jenkins?
http://winstone.sourceforge.net/#commandLine is the official command-line reference - but as I mention in the comment --handlerCountMax (or --handlerCountStartup, for that matter) did not seem to work for me. Try it yourself (here's how to run Jenkins in stand-alone mode).
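For the record, on a Debian/Ubuntu package install the usual place to pass such Winstone options is the JENKINS_ARGS line in /etc/default/jenkins, roughly like this (a sketch; as noted above, the option did not appear to take effect for me):
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpPort=$HTTP_PORT --handlerCountMax=40"
followed by sudo service jenkins restart.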
You may want to try to reduce the number of executors as well as disabling plugins you do not need and see what happens.
Please keep in mind, however, that if you plan to continue using Jenkins seriously, you should plan for more resources, not less: as the number of your jobs grows, so will the resource utilization.
You can also cut down the number of executors you have in your node. This may or may not help - it may be that Jenkins is smart enough to kill the thread when it isn't using an executor. Still, some more information would be useful: How many jobs do you have? What plugins are installed? With more details I could give better advice.
I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, so that they won't be able to connect to other parts of my server (DB, main webserver, etc.)
What's the best way to do this?
Run VMware/Virtualbox:
+ I guess it's as secure as it gets. Even if someone manages to "hack" it, they only hack the guest machine
+ Can limit the CPU & memory the processes use
+ Easy to set up - just create the VM
- Harder to "connect" the sandbox directory from the host to the guest
- Wasting extra memory and CPU for managing the VM
Run underprivileged user:
+ Doesn't waste extra resources
+ Sandbox directory is just a plain directory
? Can't limit CPU and memory?
? I don't know if it's secure enough
Any other way?
The server runs Fedora Core 8; the "other" code is written in Java & C++.
To limit CPU and memory, you want to set limits for groups of processes (POSIX resource limits only apply to individual processes). You can do this using cgroups.
For example, to limit memory start by mounting the memory cgroups filesystem:
# mkdir -p /cgroups/memory
# mount -t cgroup -o memory cgroup /cgroups/memory
Then, create a new sub-directory for each group, e.g.
# mkdir /cgroups/memory/my-users
Put the processes you want constrained (process with PID "1234" here) into this group:
# cd /cgroups/memory/my-users
# echo 1234 >> tasks
Set the total memory limit for the group:
# echo 1000000 > memory.limit_in_bytes
If processes in the group fork child processes, they will also be in the group.
The group above sets the resident memory limit (i.e. constrained processes will start to swap rather than use more memory). Other cgroup controllers let you constrain other things, such as CPU time.
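For example, the CPU controller follows the same pattern (a sketch; cpu.shares is a relative weight with a default of 1024, not a hard cap):
# mkdir -p /cgroups/cpu
# mount -t cgroup -o cpu cgroup /cgroups/cpu
# mkdir /cgroups/cpu/my-users
# echo 1234 >> /cgroups/cpu/my-users/tasks
# echo 256 > /cgroups/cpu/my-users/cpu.shares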
You could either put your server process into the group (so that the whole system, with all its users, falls under the limits) or have the server put each new session into its own group.
chroot, jail, container, VServer/OpenVZ/etc., are generally more secure than running as an unprivileged user, but lighter-weight than full OS virtualization.
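As a very minimal illustration of the chroot-plus-unprivileged-user idea (a sketch only: a real jail needs the interpreters, libraries and device nodes your users' code depends on copied in, --userspec needs a reasonably recent GNU coreutils, and chroot by itself is not a strong security boundary):
sudo mkdir -p /srv/sandbox/bin
sudo cp /bin/busybox /srv/sandbox/bin/
sudo chroot --userspec=nobody:nogroup /srv/sandbox /bin/busybox sh
A statically linked busybox (if one is available) is handy here because it needs no shared libraries inside the jail.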
Also, for Java, you might trust the JVM's built-in sandboxing, and for compiling C++, NaCl claims to be able to sandbox x86 code.
But as Checkers' answer states, it's been proven possible to cause malicious damage from almost any "sandbox" in the past, and I would expect more holes to be continually found (and hopefully fixed) in the future. Do you really want to be running untrusted code?
Reading the codepad.org/about page might give you some cool ideas: http://codepad.org/about
Running under an unprivileged user still allows a local attacker to exploit vulnerabilities to elevate privileges.
Allowing code to execute in a VM can be insecure as well; an attacker can gain access to the host system, as a recent VMware vulnerability report has shown.
In my opinion, allowing native code to run on your system in the first place is not a good idea from a security point of view. Maybe you should reconsider allowing users to run native code at all; that would certainly reduce the risk.
Check out ulimit and friends for ways of limiting the underprivileged user's ability to DoS the machine.
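For instance, hard per-user caps for a hypothetical sandbox user can go in /etc/security/limits.conf (illustrative numbers; nproc is a process count, as and fsize are in KB, cpu is in minutes):
sandbox  hard  nproc  50
sandbox  hard  as     262144
sandbox  hard  fsize  102400
sandbox  hard  cpu    5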
Try learning a little about setting up policies for SELinux. If you're running a Red Hat box, you're good to go since they package it into the default distro.
This will be useful if you know the things to which the code should not have access. Or you can do the opposite, and only grant access to certain things.
However, those policies are complicated, and may require more investment in time than you may wish to put forth.
Use the Ideone API - the simplest way.
Try using LXC as a container for your Apache server.
Not sure how much effort you want to put into this, but could you run Xen like the VPS web hosts out there?
http://www.xen.org/
This would allow full root access on their little piece of the server without compromising the other users or the base system.