Change number of threads for Jenkins server - multithreading

I installed Jenkins on my vserver. When I had a look at htop, Jenkins was running with 30 threads, each allowed to allocate 247 MB of memory and up to 1181 MB of virtual memory.
Because I only have a small vserver, I tried to change the number of threads, but I could not find any configuration file.
I installed Jenkins via aptitude install jenkins, and in htop I can see that it is running as: /usr/bin/java -jar /usr/share/jenkins/jenkins.war
Neither Tomcat nor Jetty is installed.
Where is the information about the number of threads saved? Or how can I reduce the number of threads for Jenkins?

http://winstone.sourceforge.net/#commandLine is the official command-line reference for Winstone (the servlet container embedded in jenkins.war) - but as I mentioned in the comment, --handlerCountMax (or --handlerCountStartup, for that matter) did not seem to work for me. Try it yourself (here's how to run Jenkins in stand-alone mode).
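For experimenting, those Winstone options can be passed straight to the war; on a Debian-style install they can also be appended to JENKINS_ARGS in /etc/default/jenkins. A minimal sketch with illustrative values (as noted above, they may not have any effect):
# run the war directly with Winstone's handler-count options
java -jar /usr/share/jenkins/jenkins.war --handlerCountStartup=2 --handlerCountMax=10
# or append the same flags to JENKINS_ARGS in /etc/default/jenkins, then restart the service
sudo service jenkins restart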
You may also want to reduce the number of executors, disable any plugins you do not need, and see what happens.
Please keep in mind, however, that if you plan to continue using Jenkins seriously, you should plan for more resources, not less: as the number of your jobs grows, so will the resource utilization.

You can also cut down the number of executors on your node. This may or may not help - Jenkins may be smart enough to kill a thread when its executor is idle. Still, some more information would be useful: How many jobs do you have? What plugins are installed? With more details I could give better advice.
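If you prefer to script that change rather than use the UI, here is a sketch assuming the default Debian paths and the standard <numExecutors> element in Jenkins' config.xml (back the file up first):
# drop the built-in node's executor count to 1, then restart Jenkins
sudo sed -i 's#<numExecutors>.*</numExecutors>#<numExecutors>1</numExecutors>#' /var/lib/jenkins/config.xml
sudo service jenkins restart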

Related

CoreOS alternative to /usr/lib/systemd/system-shutdown/

I recently stumbled across the fact that on shutdown/reboot any script in /usr/lib/systemd/system-shutdown will get executed before the shutdown starts.
Paraphrasing - https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
With the /usr filesystem being read-only on CoreOS, I cannot put any of my shutdown scripts in /usr/lib/systemd/system-shutdown. I'm hoping someone more knowledgeable about CoreOS and systemd knows an alternate directory path on CoreOS nodes that would give me the same result, or a configuration I can adjust to point the directory to /etc/systemd/system-shutdown or something else.
Alternatively, any pointers on creating a custom service that does the same thing as systemd-shutdown would be welcome.
My use case is that I have a few scripts that I want to execute when a node shuts down: for example, remove the node from the monitoring system, unschedule the node in Kubernetes, and drain any running pods while allowing in-flight transactions to finish.
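One sketch that stays within the writable parts of a CoreOS filesystem (the unit name, script path and timeout are illustrative, not CoreOS-specific advice): a oneshot unit in /etc/systemd/system whose ExecStop= runs the cleanup while the network is still up.
sudo tee /etc/systemd/system/node-cleanup.service <<'EOF'
[Unit]
Description=Run node cleanup before shutdown
# ordering after the network target means ExecStop runs while the network is still available
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# this runs when the unit is stopped, i.e. during shutdown or reboot
ExecStop=/opt/bin/node-cleanup.sh
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable node-cleanup.service
sudo systemctl start node-cleanup.service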

How to raise or lower the log level in puppet master?

I am using Puppet 3.2.3, Passenger and Apache on CentOS 6. I have 680 compute nodes in a cluster along with 8 gateways that users log in to in order to submit jobs. All the nodes and gateways are under Puppet control. I recently upgraded from 2.6. The master logs to syslog as desired, but how to change the log level for the master escapes me. I appear to have the choice of --debug or nothing. Debug logs far too much detail, while not using that switch simply logs each time Passenger/Apache launches a new worker to handle incoming connections.
I find nothing in the online docs about doing this. What I want is to log each time a node hits the server, but I do not need to see the compiled catalogue or resources in /var/log/messages.
How is this accomplished?
This is a hack, but here is how I solved the problem. In the file (config.ru) that Passenger uses to launch Puppet via Rack middleware, which on my system lives in /usr/share/puppet/rack/puppetmasterd, I noticed these lines:
require 'puppet/util/command_line'
run Puppet::Util::CommandLine.new.execute
I edited this to become:
require 'puppet/util/command_line'
Puppet::Util::Log.level = :info
run Puppet::Util::CommandLine.new.execute
I suppose other choices for Log.level could be :warning and others.

Script killing too long process

I'm a webhosting owner. I don't know why, but currently I have some PHP scripts that keep running for many hours (they belong to customers I know personally), so I think there is a bug somewhere.
These scripts are eating the RAM AND the swap... So I'm looking for a way to list the processes, find their execution time, and kill them one by one if the execution time exceeds 10 or 20 minutes.
I'm not a bash master, but I know bash and pipes. The only thing I don't know is how to list the processes with their execution time AND the complete command line with arguments. Even in top (after pressing c) the php processes show no arguments :/
Thanks for your help.
If you are running Apache with mod_php, you will not see a separate PHP process since the script is actually running inside an Apache process. If you are running as FastCGI, you also might not see a distinguishable PHP process for the actual script execution, though I have no experience with PHP/FastCGI and might be wrong on this.
You can set the max_execution_time option, but it is overridable at run time by calling set_time_limit() unless you run in Safe Mode. Safe mode, however, has been deprecated in PHP 5.3 and removed in 5.4, so you cannot rely on it if you are on 5.4 or plan to upgrade.
If you can manage it with your existing customers (since in some cases it requires non-trivial changes to PHP code), running PHP as CGI should allow you to monitor the actual script execution, as each CGI request will spawn a separate PHP interpreter process and you should be able to distinguish between the scripts they are executing. Note, however, that CGI is the least efficient of the three setups (the others being mod_php and FastCGI).
You can use the ps aux command to list the processes with some detailed information.
You can also check out the ps man page.
This might also be of some help.
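To the original question of listing execution time together with the full command line: ps can report both, and recent procps versions expose the elapsed time in plain seconds as the etimes column. A sketch (the 20-minute threshold and the php process-name match are assumptions; dry-run it by replacing kill with echo):
# list PID, elapsed seconds, short name and full command line for every process
ps -eo pid,etimes,comm,args
# kill php processes that have been running for more than 1200 seconds
ps -eo pid,etimes,comm --no-headers | awk '$3 ~ /^php/ && $2 > 1200 {print $1}' | xargs -r kill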

How can I write a script to keep HSQLDB running in case the process is killed

I'm running HSQLDB in server mode on a Linux server and finding that it occasionally gets killed. I'd like to be able to detect that it's stopped running and then kick off a process that starts it up again.
The DB isn't used very heavily, so polling wouldn't have to be very frequent - once every five minutes would be enough.
Look at Monit:
Monit is a free open source utility for managing and monitoring, processes, files, directories and filesystems on a UNIX system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.
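A minimal Monit rule for this case might look like the following sketch (the process-match pattern and the init script path assume the Debian hsqldb-server package mentioned below; adjust both to your setup):
sudo tee /etc/monit/conf.d/hsqldb <<'EOF'
# restart HSQLDB whenever no matching java process is found
check process hsqldb matching "org.hsqldb"
  start program = "/etc/init.d/hsqldb-server start"
  stop program  = "/etc/init.d/hsqldb-server stop"
EOF
sudo monit reload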
If you are using some type of Debian, you might try installing HSQLDB with apt-get install hsqldb-server. That will give you a nice install and the ability to start it with /etc/init.d/hsqldb-server start.
This will also take care of restarting it if your machine reboots. If you get everything installed correctly the problem of it getting killed may just go away.
I was running into some weird issues starting and stopping hsqldb, but once I got it installed correctly everything took care of itself.

Secure way to run other people code (sandbox) on my server?

I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, so that they won't be able to connect to other parts of my server (DB, main webserver, etc.)
What's the best way to do this?
Run VMware/Virtualbox:
+ I guess it's as secure as it gets. Even if someone manages to "hack" it, they only hack the guest machine
+ Can limit the CPU & memory the processes use
+ Easy to set up - just create the VM
- Harder to "connect" the sandbox directory from the host to the guest
- Wasting extra memory and CPU for managing the VM
Run underprivileged user:
+ Doesn't waste extra resources
+ Sandbox directory is just a plain directory
? Can't limit CPU and memory?
? I don't know if it's secure enough
Any other way?
Server running Fedora Core 8, the "other" codes written in Java & C++
To limit CPU and memory, you want to set limits for groups of processes (POSIX resource limits only apply to individual processes). You can do this using cgroups.
For example, to limit memory start by mounting the memory cgroups filesystem:
# mkdir -p /cgroups/memory
# mount -t cgroup -o memory cgroup /cgroups/memory
Then, create a new sub-directory for each group, e.g.
# mkdir /cgroups/memory/my-users
Put the processes you want constrained (process with PID "1234" here) into this group:
# cd /cgroups/memory/my-users
# echo 1234 >> tasks
Set the total memory limit for the group:
# echo 1000000 > memory.limit_in_bytes
If processes in the group fork child processes, they will also be in the group.
The above sets the group's resident memory limit (i.e. constrained processes will start to swap rather than use more memory). Other cgroup controllers let you constrain other things, such as CPU time.
You could either put your server process into the group (so that the whole system with all its users falls under the limits) or get the server to put each new session into a new group.
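For the per-session variant, every sub-directory of the mounted hierarchy is its own group with its own limit; following the same prompt style as above, with an illustrative 100 MB cap, session name and PID:
# mkdir /cgroups/memory/my-users/session-42
# echo 104857600 > /cgroups/memory/my-users/session-42/memory.limit_in_bytes
# echo 1234 >> /cgroups/memory/my-users/session-42/tasks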
chroot, jail, container, VServer/OpenVZ/etc., are generally more secure than running as an unprivileged user, but lighter-weight than full OS virtualization.
Also, for Java, you might trust the JVM's built-in sandboxing, and for compiling C++, NaCl claims to be able to sandbox x86 code.
But as Checkers' answer states, it's been proven possible to cause malicious damage from almost any "sandbox" in the past, and I would expect more holes to be continually found (and hopefully fixed) in the future. Do you really want to be running untrusted code?
Reading http://codepad.org/about might give you some cool ideas.
Running under an unprivileged user still allows a local attacker to exploit vulnerabilities to elevate privileges.
Allowing code to execute in a VM can be insecure as well; the attacker can gain access to the host system, as a recent VMware vulnerability report has shown.
In my opinion, allowing native code to run on your system in the first place is not a good idea from a security point of view. Maybe you should reconsider allowing them to run native code; that would certainly reduce the risk.
Check out ulimit and friends for ways of limiting the unprivileged user's ability to DoS the machine.
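For instance, a small wrapper along these lines (the user name, limits and shell are all illustrative) applies per-process limits before handing control to the untrusted code:
#!/bin/sh
# apply resource limits, then run the supplied command as the unprivileged "sandbox" user;
# the limits are inherited by any child processes it starts
ulimit -t 300      # at most 300 seconds of CPU time
ulimit -v 262144   # at most 256 MB of virtual memory (value is in KB)
ulimit -u 50       # at most 50 processes for this user
exec su -s /bin/sh -c "$*" sandbox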
Try learning a little about setting up policies for SELinux. If you're running a Red Hat box, you're good to go since they package it into the default distro.
This will be useful if you know the things to which the code should not have access. Or you can do the opposite, and only grant access to certain things.
However, those policies are complicated, and may require more investment in time than you may wish to put forth.
Use Ideone API - the simplest way.
Try using LXC as a container for your Apache server.
Not sure how much effort you want to put into this, but could you run Xen like the VPS web hosts out there?
http://www.xen.org/
This would allow full root access on their little piece of the server without compromising the other users or the base system.
