How to prevent puppetserver tmpdir 100% usage - puppet

I have installed puppetserver-6.10.0-1 on a 16-core/8 GB RAM virtual machine running CentOS 7. The Puppet server manages 700+ other machines. In /etc/sysconfig/puppetserver I added "-Djava.io.tmpdir=/var/tmp" to JAVA_ARGS; that filesystem had 15 GB of free space. After two days the directory filled up with jruby-* directories and puppetserver broke.
So my questions:
Is this the wrong behavior?
Is there a way to prevent the directory from filling up, or do I just need more disk space?

Problem solved by adding RAM to the machine and increasing the Xmx Java option in /etc/sysconfig/puppetserver:
JAVA_ARGS="-Xms7g -Xmx20g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -Djava.io.tmpdir=/var/tmp"
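If adding RAM is not an option right away, one possible stopgap (my own suggestion, not part of the fix above) is to remove stale jruby-* temp directories from a daily cron job, for example:

# Remove jruby-* temp directories under /var/tmp that have not been
# modified for more than two days; newer ones may still be in use by
# running JRuby instances, so match the age to your restart cadence.
find /var/tmp -maxdepth 1 -type d -name 'jruby-*' -mtime +2 -exec rm -rf {} +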

Related

Does Docker manage the filesystem like a standalone OS?

I have a program I'm running in a Docker container. After 10-12 hours of running, the program terminated with filesystem-related errors (FileNotFoundError, or similar).
I'm wondering whether the disk space filled up (or a similar filesystem-related issue occurred) or whether there was a problem in my code (e.g. one process deleted the file prematurely).
I don't know much about Docker's management of files and wonder whether, inside a container, Docker creates and manages its own filesystem or not. Here are the three possibilities I'm considering; I mainly wonder whether #1 could be the case:
1. Docker manages its own filesystem, so although disk space is available on the host machine, the container ran out of its own storage space. (I've seen similar issues with running out of memory for a process that has a limit artificially imposed using cgroups.)
2. The host filesystem ran out of space and the files got corrupted or didn't get written correctly.
3. There is some bug in my code.
This is likely a bug in your code. Most programs print the error they encounter, and when a program runs out of space, the error returned by the filesystem is "No space left on device" (errno 28, ENOSPC).
If you see FileNotFoundError, that means the file is missing. My best theory is that it's coming from your consumer process.
It's still possible, though, that the file doesn't exist because the producer ran out of space and you didn't handle the error correctly - you'll need to check your logs.
It might also be a race condition, depending on your application. There's really not enough detail here to answer that.
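If you want to rule out a swallowed out-of-space error quickly, one thing you could try (a sketch; the container name myapp and logging to stdout/stderr are assumptions) is to grep the container's captured output:

# Search everything the container wrote to stdout/stderr for the
# classic out-of-space error string.
docker logs myapp 2>&1 | grep -i "no space left on device"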
As to the title question:
By default, Docker just overlay-mounts an empty directory from the host's filesystem into the container, so the amount of free space in the container is the same as the amount on the host.
If you're using volumes, that depends on the storage driver you use. As Dan Serbyn mentioned, the default limit for the devicemapper driver is 10 GB. The overlay2 driver - the default driver - doesn't have that limitation.
With the devicemapper storage driver, there is a default limit of 10 GB on Docker container storage.
You can check the disk space that containers are using by running the following command:
docker system df
It's also possible that the file your container is trying to access has permission restrictions. Try making it accessible to the Docker container, or maybe to everybody (chmod 777 file.txt).
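To narrow down possibility #1 yourself, you can compare what the host and the container actually see (a sketch; myapp is a placeholder container name, and df must exist inside the container's image):

# Which storage driver the daemon uses (overlay2, devicemapper, ...).
docker info | grep -i 'storage driver'

# Free space on the host filesystem that backs Docker's data directory.
df -h /var/lib/docker

# Free space as seen from inside the container.
docker exec myapp df -h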

How do I get my VPS to start using the extra disk space I purchased via HostMonster?

My VPS is running CentOS. Through my hosting company I am provided cPanel and WHM. I keep getting warning emails that my main disk is nearly full. I was under the assumption that upgrading to more disk space would solve the problem, but this has not been the case. What do I need to do so that my VPS will effectively make use of this extra space?
Thanks!
You have to extend the given partition:
1. Identify which partition to resize (check twice - a mistake here can cost you the whole system).
2. Make a backup :)
3. Resize the partition.
4. Resize the filesystem (a rough sketch of steps 1, 3 and 4 follows).
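The sketch below assumes a typical CentOS guest where the root filesystem sits on /dev/vda1, growpart (from the cloud-utils-growpart package) is installed, and the filesystem is ext4 - substitute the device names and filesystem type that lsblk and df actually report:

# Step 1: identify the partition and filesystem type behind the full mount.
df -hT /
lsblk

# Step 3: grow partition 1 of /dev/vda into the newly purchased space.
growpart /dev/vda 1

# Step 4: grow the filesystem to fill the enlarged partition.
resize2fs /dev/vda1      # for ext4
# xfs_growfs /           # use this instead if the filesystem is XFS

Step 2, the backup, is whatever your host's snapshot or backup tooling provides.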

Docker on Windows Nano Server: There is not enough space on the disk

I successfully started Docker on Windows Server 2016 Nano Server.
I've pulled the images microsoft/nanoserver and microsoft/sample-dotnet.
But when I tried to pull other images, like microsoft/dotnet-framework, I got the following message:
"docker : write C:\Windows\TEMP\GetImageBlob193586394: There is not enough space on the disk."
I'm using the Nano Server on Azure with a 512 GB SSD, and I've just deployed the OS.
Does anyone know what is happening?
Thank you!
So your free disk space is 1 GB out of 7 GB?
That is, of course, too little. You probably already pulled a servercore image, which uses around 7 GB.
You need to expand your partition size:
https://technet.microsoft.com/de-de/library/hh848680(v=wps.630).aspx

hung_task_timeout_secs error during copy to a mount point in linux

I am trying to copy data files from my VM to an NFS VM with ZFS storage (both VMs can talk to each other). During the copy I sometimes encounter this error:
INFO: task cp: blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Both my VMs hang and I have to restart them. If I copy again, it works.
I have around 233 data files to copy, and it's becoming difficult to restart the VMs again and again.
I looked at the solutions given on the internet and changed vm.dirty_ratio to 5 and vm.dirty_background_ratio to 10, but it did not work.
I am running these VMs on VirtualBox and allocated around 17 GB of RAM to one and around 6 GB to the NFS VM.
Is there any hack which could help me copy these files to the NFS share without my VMs hanging?
I am sorry to answer a question with more questions, but this case has many variables that need exploring.
1. You have a Linux VM sharing your storage (assumption).
A. Which distro? 32-bit or 64-bit? When the problem happens, what does top report for system load?
B. Local storage, NAS, or SAN?
C. Which version of NFS - 3 or 4?
D. Can you set the options of the mount when mapping the NFS share? You might want to play with rsize and wsize, setting them to at least 64000 (see the example mount after this list). I would also recommend setting noatime and nodiratime on the share.
E. From my VMware background with Gluster, there are some timeout/refresh settings you can set on the storage side - how often the storage publishes its presence, telling clients it is alive. A good start is 20 seconds.
F. VMware can tell you how much latency you have for reads and writes at the physical and at the VM level. Try to figure those out to know who to blame.
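The example for item D, with placeholder names - storage:/export/data and /mnt/data are made up, and NFSv3 is only an example; match vers= to what your server actually offers:

# Mount the share with larger read/write block sizes and without
# access-time updates, as suggested in item D above.
mkdir -p /mnt/data
mount -t nfs -o vers=3,rsize=65536,wsize=65536,noatime,nodiratime,hard \
    storage:/export/data /mnt/data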
Ah, and, of course, make sure your Linux VM has the latest patches applied.
Let's see where we get from here.

Jenkins / Hudson CI Minimum Requirements for a linux RH installation

We are planning to use Jenkins (formerly Hudson) for the automated builds of our project.
I need to find out what it needs from a system-requirements standpoint (RAM, disk, CPU) for a Linux RH installation.
We will be testing a mobile application project.
I did check this post but couldn't find a response.
I've been maintaining a Jenkins / Sonar / Nexus setup, and here is the minimal configuration I arrived at (Debian 5):
CPU: n/a (bye-bye plain old-school CPUs, at least ;) )
RAM: 1 GB (I prefer 2)
HDD: depends on the needs. For my use, an 8-module J2EE Maven project + DB scripts (6500 lines of code) represents less than 50 MB. I configured Jenkins to keep 10 builds (500 MB).
However, if Jenkins has to manage several projects at a time, you have to consider a few things:
keep the Jenkins data in a directory separate from the system (a default install may put it under /usr), using the Jenkins configuration mechanism of your choice (a sketch follows at the end of this answer)
mount a dedicated HDD partition on that directory and leave yourself a way to manage disk space (virtual drive, partition-resizing tool...)
supervise activity so you don't run out of space and end up with an angry boss :) (Nagios, for example)
Also think about backups, other applications on the server, and one important thing - Jenkins resource usage depends on JVM capacity.
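As a sketch of the first two points on an RPM-based Red Hat install, assuming a spare disk shows up as /dev/sdb1, the official Jenkins RPM is used, and it reads JENKINS_HOME from /etc/sysconfig/jenkins:

# Give Jenkins data its own partition so runaway builds cannot fill /.
# Do this before the first start of Jenkins, or move existing data first.
mkdir -p /var/lib/jenkins
echo '/dev/sdb1  /var/lib/jenkins  ext4  defaults,noatime  0 2' >> /etc/fstab
mount /var/lib/jenkins
chown jenkins:jenkins /var/lib/jenkins

# Confirm where Jenkins will keep its data (the RPM default is this path).
grep JENKINS_HOME /etc/sysconfig/jenkins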
