Dovecot process limits - FreeBSD

Sometimes my Dovecot log reports:
service(imap-login): process_limit (512) reached, client connections are being dropped
I can increase process_limit in the Dovecot config file, but I don't understand how it will affect the system.
How do I diagnose why the process limit is being reached? I have around 50 users on my Postfix+Dovecot+Roundcube system.
My configuration:
FreeBSD 10.0-stable
Postfix 2.10
Dovecot 2.2.12

Dovecot has two modes for its login processes.
The first, called secure mode, gives each client connection its own login process.
The second, called performance mode, serves all clients from a single process.
In practice performance mode is not really insecure; rather, secure mode is paranoid.
Set the desired mode in the config:
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
  # service_count = 0 # Performance mode
  service_count = 1 # Secure mode
  process_min_avail = 1
}
In my case, performance mode serves 1k+ users.
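A rough way to size process_limit for secure mode (service_count = 1), where each client connection costs one imap-login process. The per-user figures below are illustrative assumptions, not measurements:

```shell
# Secure mode: one imap-login process per connection, so process_limit
# must cover peak concurrent connections. Assumed figures: 50 users,
# ~6 IMAP connections each (phones, desktop clients, webmail), 2x headroom.
users=50
conns_per_user=6
headroom=2
echo $(( users * conns_per_user * headroom ))   # suggested process_limit
```

If 50 users are exhausting a limit of 512, clients are most likely holding many idle connections; doveadm who lists the live sessions so you can see which users are responsible.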

Related

Set chrony.conf file to "server" instead of "pool"

I'm working with RHEL 8, where the ntp package is no longer supported; NTP is now implemented by the chronyd daemon, provided in the chrony package. The file is set up to use public servers from the pool.ntp.org project (pool 2.rhel.pool.ntp.org iburst). Is there a way to set server instead of pool?
My chrony.conf file:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.rhel.pool.ntp.org iburst
Yes. Simply comment out the pool line and add server lines for the servers you want:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# pool 2.rhel.pool.ntp.org iburst
server 0.africa.pool.ntp.org iburst
server 0.us.pool.ntp.org
server 0.south-america.pool.ntp.org
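The edit can also be scripted. A minimal sketch with sed, run here against a scratch copy so the real /etc/chrony.conf is untouched (the pool line mirrors the stock RHEL 8 file; the server chosen is just an example):

```shell
# Work on a scratch copy standing in for /etc/chrony.conf
conf=$(mktemp)
printf 'pool 2.rhel.pool.ntp.org iburst\n' > "$conf"

# Comment out every pool line, then append an explicit server line
sed -i 's/^pool /# pool /' "$conf"
printf 'server 0.us.pool.ntp.org iburst\n' >> "$conf"

cat "$conf"
# Apply the same edit to /etc/chrony.conf, then restart chronyd and
# verify the new source with: chronyc sources
```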

Gitlab and Exim conflicting 'from' addresses when sending emails

I have installed Gitlab 8.15 and Exim 4.84 on CentOS 7.
Whenever Gitlab sends a message, it should come from 'gitlab@mydomain.nl', which is correctly set in config/gitlab.yml.
If I look in the log, I see the following:
2016-12-21 21:50:02 cwd=/ 6 args: /usr/sbin/sendmail -i -f gitlab@mydomain.nl -- mypersonal@gmail.com
2016-12-21 21:50:02 1cJnpq-0001ZR-NG <= git@vps.mydomain.nl U=git P=local S=3859 id=585aeafaad130_175126f0b9c43854@vps.mydomain.nl.mail T="Reset password instructions" from <git@vps.mydomain.nl> for mypersonal@gmail.com
Note that between those two lines, the from address changed from gitlab@mydomain.nl to git@vps.mydomain.nl, which is user@FQDN.
My external SMTP server then does DKIM and SPF lookups on vps.mydomain.nl instead of mydomain.nl, which fail, and the mail is rejected.
I am not sure where this change happens or how I should fix it. Is this something on the Gitlab side or on the Exim side?
The relevant parts from my exim configuration:
begin routers
mysmtphost_email:
  driver = manualroute
  domains = ! +local_domains
  ignore_target_hosts = 127.0.0.0/8
  transport = mysmtphost_relay
  route_list = * vps.mysmtphost.email::587
  no_more
(...)
begin transports
mysmtphost_relay:
  driver = smtp
  port = 587
  hosts_require_auth = <; $host_address
  hosts_require_tls = <; $host_address
I just found out that the user git was not part of the trusted_users directive in the exim.conf file. I changed it to include the user as follows:
trusted_users = mail:apache:passenger:git
I came to this conclusion because mails sent by other Rails applications running as the user passenger were sent correctly, with the from address as specified. Then I noticed that passenger was part of this directive and git was not.
From the Exim documentation:
Trusted users are always permitted to use the -f option or a leading
“From ” line to specify the envelope sender of a message that is
passed to Exim through the local interface (see the -bm and -f options
below). See the untrusted_set_sender option for a way of permitting
non-trusted users to set envelope senders.
http://www.exim.org/exim-html-current/doc/html/spec_html/ch-the_exim_command_line.html#SECTtrustedadmin
Processes running as root or the Exim user are always trusted. Other
trusted users are defined by the trusted_users or trusted_groups
options. In the absence of -f, or if the caller is not trusted, the
sender of a local message is set to the caller’s login name at the
default qualify domain.
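That last paragraph is exactly what happened in the log above: for an untrusted caller Exim discards -f and rebuilds the envelope sender as the login name at the qualify domain. A sketch of the substitution, with the login and domain hard-coded to the values from this question purely for illustration:

```shell
# Untrusted caller: Exim ignores "-f gitlab@mydomain.nl" and instead
# uses <caller's login>@<qualify domain>, which here becomes the
# git@vps.mydomain.nl seen in the second log line.
login=git
qualify_domain=vps.mydomain.nl
echo "${login}@${qualify_domain}"
```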

connect EADDRNOTAVAIL in nodejs under high load - how to free or reuse TCP ports faster?

I have a small wiki-like web application based on the Express framework which uses Elasticsearch as its back-end. For each request it basically just goes to the Elasticsearch DB, retrieves the object and returns it rendered by the Handlebars template engine. The communication with Elasticsearch is over HTTP.
This works great as long as I have only one node-js instance running. After I updated my code to use cluster (as described in the Node.js documentation), I started to encounter the following error: connect EADDRNOTAVAIL.
This error shows up when I have 3 or more Python scripts running which constantly retrieve URLs from my server. With 3 scripts I can retrieve ~45,000 pages; with 4 or more scripts running it is between 30,000 and 37,000 pages. Running only 2 scripts or 1 script, I stopped them after half an hour, by which point they had retrieved 310,000 and 160,000 pages respectively.
I've found this similar question and tried changing http.globalAgent.maxSockets but that didn't have any effect.
This is the part of the code which listens for the URLs and retrieves the data from Elasticsearch.
app.get('/wiki/:contentId', (req, res) ->
  http.get(elasticSearchUrl(req.params.contentId), (innerRes) ->
    if (innerRes.statusCode != 200)
      res.send(innerRes.statusCode)
      innerRes.resume()
    else
      body = ''
      innerRes.on('data', (bodyChunk) ->
        body += bodyChunk
      )
      innerRes.on('end', () ->
        res.render('page', {'title': req.params.contentId, 'content': JSON.parse(body)._source.html})
      )
  ).on('error', (e) ->
    console.log('Got error: ' + e.message) # the error is reported here
  )
)
UPDATE:
After looking into it more, I now understand the root of the problem. I ran the command netstat -an | grep -e tcp -e udp | wc -l several times during my test runs to see how many ports were in use, as described in the post Linux: EADDRNOTAVAIL (Address not available) error. I observed that at the moment I received the EADDRNOTAVAIL error, 56,677 ports were in use (instead of ~180 normally).
Also, with only 2 simultaneous scripts, the number of used ports saturates at around 40,000 (+/- 2,000); that means ~20,000 ports are used per script (that is the point at which node-js cleans up old ports before creating new ones), while with 3 scripts running it climbs past 56,677 ports (towards ~60,000). This explains why it fails with 3 scripts requesting data, but not with 2.
So now my question becomes: how can I force node-js to free up ports more quickly, or reuse the same ports the whole time (which would be the preferable solution)?
Thanks
For now, my solution is setting the agent in my request options to false. According to the documentation, this
opts out of connection pooling with an Agent, defaults request to Connection: close.
As a result, my number of used ports doesn't exceed 26,000. This is still not a great solution, especially since I don't understand why reusing ports doesn't work, but it solves the problem for now.
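The back-pressure here is arithmetic: every outgoing connection parks its ephemeral port in TIME_WAIT after closing. A rough sketch of the sustainable connection rate, assuming the default Linux ephemeral range of about 28,000 ports and the usual 60-second TIME_WAIT (both figures are assumptions; check net.ipv4.ip_local_port_range on the actual box):

```shell
# ~28k usable ephemeral ports divided by a 60s TIME_WAIT hold means the
# box can only sustain a few hundred new outgoing connections per second
# before EADDRNOTAVAIL.
ports=28000
time_wait_secs=60
echo $(( ports / time_wait_secs ))   # max new connections per second
```

This is also why reusing connections (an HTTP keep-alive agent) is the preferable fix the question asks for: a pooled socket is never returned to the ephemeral-port churn in the first place.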

Azure HTTP request timeout workaround

We currently have an application hosted on an Azure VM instance.
This application sometimes processes long-running and idle HTTP requests. This is causing an issue because Azure will close any connection that has been idle for longer than a few minutes.
I've seen some suggestions about setting a lower TCP keepalive rate. I've tried setting this rate to around 45 seconds, but my HTTP requests are still being closed.
Any suggestions? Our VM is running Server 2008 R2.
As a simple workaround, I had my script send a newline character every 5 seconds or so to keep the connection alive.
Example:
set_time_limit(60 * 30);
ini_set("zlib.output_compression", 0);
ini_set("implicit_flush", 1);

function flushBuffers()
{
    @ob_end_flush();
    @ob_flush();
    @flush();
    @ob_start();
}

function azureWorkaround($char = "\n")
{
    echo $char;
    flushBuffers();
}

$html = '';
$employees = getEmployees();
foreach ($employees as $employee) {
    $html .= getReportHtmlForEmployee($employee);
    azureWorkaround();
}
echo $html;
The Azure Load Balancer now supports configurable TCP Idle timeout for your Cloud Services and Virtual Machines. This feature can be configured using the Service Management API, PowerShell or the service model.
For more information check the announcement at http://azure.microsoft.com/blog/2014/08/14/new-configurable-idle-timeout-for-azure-load-balancer/

Graphite not graphing statsd requests

I've got Graphite and statsd (Node.js 0.6.2) set up on Ubuntu 11.04, running nginx 1.0.10 with uWSGI.
I can confirm that Graphite is set up correctly: when I run the example Python client, it drops data onto the graph as it should. However, when I start statsd (it starts without error) and run my app that just loops and dumps stats, I don't see any stats being graphed.
I've done tcpdump on port 8125 and I am seeing the request coming in. Any thoughts?
|your script| -> |statsd:8125|
Edit the statsd config file and change the backend to 'console'. Now start statsd and your script in parallel. The statsd terminal will start dumping output. (The default flushInterval is 10000ms)
|statsd:8125| -> |carbon/whisper|
tail -f the log files in "/opt/graphite/storage/log/carbon-cache/carbon-cache-a". The files there are console.log, creates.log, listener.log and query.log. Of these, "creates.log" will tell you about the .wsp files being created. Ensure that the files are being created; they reside in "/opt/graphite/storage/whisper/stats".
For more info on the schema and config of the data being stored there, use whisper-dump.py to read a .wsp file.
Sample output:
Meta data:
  aggregation method: average
  max retention: 157784400
  xFilesFactor: 0.5

Archive 0 info:
  offset: 52
  seconds per point: 1
  points: 10080
  retention: 10080
  size: 120960
Now ensure that the statsd config specifies "localhost" and "2003" as the addr and port.
Open localhost in your browser; you should get the Graphite web UI. Select your metrics from the tree on the left, and you should have your graphs.
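While checking the |your script| -> |statsd:8125| leg, it also helps to bypass the app and hand-craft a packet. The statsd line protocol is just name:value|type; a sketch below (the metric name is made up, and the nc send in the final comment assumes statsd on its default localhost:8125):

```shell
# statsd wire format: <metric>:<value>|<type>   (c = counter, ms = timer, g = gauge)
metric="deploys.test"
value=1
packet="${metric}:${value}|c"
echo "$packet"
# Fire it at a running statsd by hand:
#   printf '%s' "$packet" | nc -u -w1 localhost 8125
# With the backend set to console, it should show up at the next flush.
```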
