CentOS 6.3
munin 2.0.17-1
php54 (php-fpm)
nginx 1.2.6-1
I set up munin via the 'epel' repo and, after some tinkering, got it working with multiple nodes. However, graph zoom did not work on any of the graphs. I ended up switching the *_strategy settings from html to cgi per a suggestion I found online - which made it so that none of the graphs are updating (since CGI isn't working) and the zoom is still broken.
All of the guides that I can find online (including the official one: http://munin-monitoring.org/wiki/CgiHowto2) refer to using spawn-fcgi (which I used on an older CentOS 5 server) and spawning dedicated instances for this. However, I'm using php-fpm rather than spawn-fcgi on this server and am having trouble adapting this to work.
By "not working" I mean that the graph will not load on the 'zoom' screen; it shows a broken image link instead. The nginx error log shows:
2013/09/05 16:31:59 [error] 29384#0: *2 open() "/usr/share/nginx/vhosts/munin.mydomain.com/public_html/munin-cgi/munin-cgi-graph/mydomain.com/host.mydomain.com/postfix_mailvolume-pinpoint=1378299671,1378407671.png" failed (2: No such file or directory), client: 10.30.2.1, server: munin.mydomain.com, request: "GET /munin-cgi/munin-cgi-graph/mydomain.com/host.mydomain.com/postfix_mailvolume-pinpoint=1378299671,1378407671.png?&lower_limit=&upper_limit=&size_x=800&size_y=400 HTTP/1.1", host: "munin.mydomain.com", referrer: "http://munin.mydomain.com/static/dynazoom.html?cgiurl_graph=/munin-cgi/munin-cgi-graph&plugin_name=mydomain.com/host.mydomain.com/postfix_mailvolume&size_x=800&size_y=400&start_epoch=1378299671&stop_epoch=1378407671"
Here is the munin.conf:
[16:42:21]$ cat /etc/munin/munin.conf | sed -e '/^#/d' -e '/^$/d'
htmldir /usr/share/nginx/vhosts/munin.mydomain.com/public_html/
includedir /etc/munin/conf.d
graph_strategy cgi
cgiurl_graph /munin-cgi/munin-cgi-graph
html_strategy cgi
[host.mydomain.com]
address 127.0.0.1
use_node_name yes
[otherhost.mydomain.com]
address 1.2.3.4
use_node_name yes
Here is the vhost for nginx:
[16:44:16]$ cat /etc/nginx/conf.d/vhosts/munin.mydomain.com.conf | sed -e '/^$/d' -e '/^#/d'
server {
    listen 80;
    server_name munin.mydomain.com;
    access_log /var/log/nginx/munin.mydomain.com combined;
    error_log /var/log/nginx/error.log warn;
    rewrite_log on;
    root /usr/share/nginx/vhosts/munin.mydomain.com/public_html/;
    index index.php index.html index.htm;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /usr/share/nginx/vhosts/munin.mydomain.com/.htpasswd;
    }
    location ^~ /cgi-bin/munin-cgi-graph/ {
        fastcgi_split_path_info ^(/cgi-bin/munin-cgi-graph)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        include fastcgi_params;
    }
    location /munin/static/ {
        alias /etc/munin/static/;
    }
    location /munin/ {
        fastcgi_split_path_info ^(/munin)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        include fastcgi_params;
    }
    # Deny hidden file types
    location ~ /(\.ht|\.git|\.svn) {
        deny all;
    }
}
At this point, I'm frustrated enough that I think I'm hitting brain lock. I'll admit that my lack of a full understanding of nginx's syntax, and of how it interacts with php-fpm, is probably to blame - especially if there is a simple syntax change that would get this working.
Any help with resolving this with my existing stack would be most appreciated. I have been googling and trying various things for the better part of the day.
Thanks
This is a bug related to SELinux in RHEL, according to https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1000736.
Description of problem:
zooming doesn't work when selinux is in enforcing mode
Version-Release number of selected component (if applicable):
munin-2.0.17-1.el6.noarch
selinux-policy-3.7.19-195.el6_4.12.noarch
selinux-policy-targeted-3.7.19-195.el6_4.12.noarch
Steps to Reproduce:
1. click on munin graph to zoom in
Actual results:
no graph image
Expected results:
graph image
Additional info:
it works with selinux in permissive mode
If you disable SELinux, it works fine:
sudo setenforce 0
According to the last comment in the bug report, this should be fixed in RHEL 6.5 (CentOS should pick it up).
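If you would rather keep SELinux enforcing instead of disabling it, building a local policy module from the logged denials should also work. This is only a sketch: the module name is made up, you need the audit2allow tooling (policycoreutils-python on CentOS 6), and the denials in your audit log may differ.
# collect the munin-related AVC denials and turn them into a local policy module
grep munin /var/log/audit/audit.log | audit2allow -M munin_graph_local
# load the generated module
semodule -i munin_graph_local.pp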
You've mapped location /cgi-bin/munin-cgi-graph/ via FastCGI passthrough to PHP-FPM, but that only works for PHP scripts, not for arbitrary CGI programs such as Munin's CGI grapher, which is written in Perl. To make that CGI script speak the FastCGI protocol, you need a generic wrapper such as spawn-fcgi.
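A minimal sketch of what that can look like, assuming the EPEL package installs the grapher at /var/www/cgi-bin/munin-cgi-graph (check the real path with something like rpm -ql munin | grep cgi) and using a socket path chosen here only for illustration:
# run the Perl grapher as a FastCGI daemon on a unix socket,
# readable by nginx, running as the munin user
spawn-fcgi -s /var/run/munin/fcgi-graph.sock -U nginx -u munin -g munin \
    /var/www/cgi-bin/munin-cgi-graph
# nginx: match the cgiurl_graph set in munin.conf and pass to that socket
location ^~ /munin-cgi/munin-cgi-graph {
    fastcgi_split_path_info ^(/munin-cgi/munin-cgi-graph)(.*);
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/var/run/munin/fcgi-graph.sock;
    include fastcgi_params;
}
The key point is that the process listening on that socket is the Perl CGI script itself; php-fpm never comes into play for the graphs.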
Related
I want to deploy an application built on the ASP.NET 4.0 framework; development was done on Windows / IIS.
Now I need to deploy it on a CentOS operating system using nginx as the web server.
I'm using CentOS 6.8, nginx 1.10 and mono 4.6.1.5.
My nginx configuration file is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;
    #root /usr/share/nginx/html/Site;
    location / {
        index index.aspx index.html index.htm index.aspx default.aspx Default.aspx Global.asax;
        fastcgi_index Global.asax;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
    ...
}
In the fastcgi_params file I include:
fastcgi_param PATH_INFO "";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
I start the mono FastCGI server with:
fastcgi-mono-server4 /applications=/:/usr/share/nginx/html /socket=tcp:127.0.0.1:9000 &
I access http://localhost/Default.aspx for testing, and it shows a result.
The problem is when I access the Site folder at http://localhost/Site/; it displays the following message:
System.Web.HttpException
This type of page is not served.
Description: HTTP 403.The type of page you have requested is not served because it has been explicitly forbidden. The extension '.asax' may be incorrect. Please review the URL below and make sure that it is spelled correctly.
Details: Requested URL: /Site/Global.asax
Exception stack trace:
at System.Web.HttpForbiddenHandler.ProcessRequest (System.Web.HttpContext context) [0x00073] in <d3ba84a338d241e2ab5397407351c9cd>:0
at System.Web.HttpApplication+<Pipeline>c__Iterator1.MoveNext () [0x00dd7] in <d3ba84a338d241e2ab5397407351c9cd>:0
at System.Web.HttpApplication.Tick () [0x00000] in <d3ba84a338d241e2ab5397407351c9cd>:0
The nginx user has privileges on the Site folder. Could some dependency be missing in the application's Web.config? Any ideas? Thank you.
It sounds like there is a permission issue. Please check to be sure that the permissions on the Site folder are 755. If not, please 'sudo chmod -R 755' the folder.
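For example (the path is an assumption based on the root directive in the question; adjust it to wherever your Site folder actually lives):
sudo chmod -R 755 /usr/share/nginx/html/Site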
Some further reading.
https://askubuntu.com/questions/9402/what-file-permissions-should-i-set-on-web-root
I need to set up the nginx configuration such that the URL "http://host/cgi-bin/hw.sh/some/path/to/data/" triggers the shell script "hw.sh" located under the path "/usr/lib/cgi-bin/".
Now, according to the instructions on the page https://www.howtoforge.com/serving-cgi-scripts-with-nginx-on-debian-squeeze-ubuntu-11.04-p3, the configuration needs to go in a ".vhost" file. But I have a default file already present at "/etc/nginx/sites-available/default" instead of a .vhost file.
When I use the same configuration, I get an HTTP/1.1 403 Forbidden error. I have made sure that the script has the required executable rights too. Below is the error from the nginx logs.
FastCGI sent in stderr: "Cannot get script name, are DOCUMENT_ROOT and SCRIPT_NAME (or SCRIPT_FILENAME) set and is the script executable?"
while reading response header from upstream,
client: host_ip, server: localhost,
request: "HEAD /cgi-bin/hw.sh/some/path/to/data/ HTTP/1.1",
upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "host_ip"
I need help in writing the correct configuration so that my URL above executes the hw.sh script under the path mentioned above and returns proper output. Could someone please help me out here?
Below is my configuration from the default file.
server {
    listen 80 default_server;
    [...]
    location /cgi-bin/ {
        # Disable gzip (it makes scripts feel slower since they have to complete
        # before getting gzipped)
        gzip off;
        # Set the root to /usr/lib (inside this location this means that we are
        # giving access to the files under /usr/lib/cgi-bin)
        root /usr/lib;
        # Fastcgi socket
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        # Fastcgi parameters, include the standard ones
        include /etc/nginx/fastcgi_params;
        # Adjust non standard parameters (SCRIPT_FILENAME)
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    [...]
}
The line "fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;" in the configuration was causing the problem.
When I changed it to "fastcgi_param SCRIPT_FILENAME $request_filename;", everything worked as expected.
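For reference, the location block that ended up working looks like this (identical to the original except for SCRIPT_FILENAME):
location /cgi-bin/ {
    gzip off;
    root /usr/lib;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include /etc/nginx/fastcgi_params;
    # $request_filename resolves to /usr/lib/cgi-bin/hw.sh/... so fcgiwrap can
    # locate the executable and treat the trailing segments as path info
    fastcgi_param SCRIPT_FILENAME $request_filename;
}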
I am trying to host an ASP.NET Web API on Linux with mono and nginx. Any MVC web application (.aspx) hosts perfectly, but if I try to host a Web API (.asax) I get a 403 error.
I've installed mono and nginx correctly. My virtual host configuration works for an MVC web application, but not for an ASP.NET Web API.
My virtual host configuration looks like this (I've added Global.asax as a possible index file and replaced index.aspx with Global.asax in the location "/"):
server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6
    root /var/www/webapp/;
    index index.html index.htm index.aspx Global.asax default.aspx;
    # Make site accessible from http://localhost/
    server_name mydomain.com;
    location / {
        try_files $uri $uri/ /Global.asax;
    }
    location ~ \.(aspx|asmx|ashx|asax|ascx|soap|rem|axd|cs|config|dll)$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I'm also using FastCGI. This is a simple startup script for it:
#!/bin/sh
[cutting out....]
DAEMON=/usr/local/bin/mono
NAME=monoserver
DESC=monoserver
MONOSERVER=$(which fastcgi-mono-server4)
MONOSERVER_PID=$(ps auxf | grep fastcgi-mono-server4.exe | grep -v grep | awk '{print $2}')
WEBAPPS="mydomain.com:/:/var/www/webapp"
case "$1" in
start)
if [ -z "${MONOSERVER_PID}" ]; then
echo "starting mono server"
${MONOSERVER} /applications=${WEBAPPS} /socket=tcp:127.0.0.1:9000 &
echo "mono server started"
[cutting out....]
All these settings work fine with a normal .aspx web page, but if I try to host a Web API, I get an error like this: Error 403 - Server Error in '/' Application: This type of page is not served.
If I try to host this Web API on Windows with IIS, everything works fine, so I think the configuration should be okay.
Does anyone have an idea what my problem is exactly?
I have followed this website http://raspberrypihelp.net/tutorials/24-raspberry-pi-webserver to set up the HTTP server nginx on my Raspberry Pi and to set up a site called example.com. But when I run sudo service nginx restart, it says:
Restarting nginx: nginx: [emerg] unknown directive " " in /etc/nginx/sites-enabled/example.com:3
Here is the code in example.com.
server {
    server_name example.com 192.168.1.88;
    access_log /srv/www/example.com/logs/access.log;
    error_log /srv/www/example.com/logs/error.log;
    root /srv/www/example.com/public/;
    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
    }
    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }
}
I am just following the steps, but it won't run successfully.
I had the same problem: I had copy/pasted the config code from the web, and some dirty EOL (end of line) characters were there.
The editor didn't show them, but nginx treated them like a directive.
I just deleted every line ending and added it again.
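If you want to see those invisible characters instead of guessing, something like this will show them (cat -A marks line ends and non-printing characters; file reports the encoding):
# show line endings and non-printing characters pasted in from the web
cat -A /etc/nginx/sites-enabled/example.com | head -n 20
# check the file's encoding while you are at it
file /etc/nginx/sites-enabled/example.com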
It sounds like you did some copy and paste work here. It's not uncommon to snag some extra invisible characters at the end of a line (EOL). Try this:
Run your text through this tool:
http://www.textfixer.com/tools/remove-line-breaks.php
then fix any line breaks that were removed, paying attention to lines affected by comments.
This worked for me. Hope it works for you.
It looks like the nginx binary was compiled with the --without-http_fastcgi_module option. This is not the default. Try downloading or compiling a different binary.
Try running
nginx -V
(with uppercase V) to see what options were used to compile nginx.
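For example, to check specifically for the fastcgi module (nginx -V prints to stderr, hence the redirect):
nginx -V 2>&1 | tr ' ' '\n' | grep -i fastcgi
# --without-http_fastcgi_module in the output means it was disabled at build time;
# no output means the default, built-in fastcgi module is present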
I edited some text in the middle of the conf file, and nginx started showing this error at the very start of the file. I copied the contents of the file, created a new file, pasted the contents there, and nginx stopped showing this error.
I faced a similar issue, with the error message "unknown directive 'index.html'", when running 'sudo nginx -t'. After correcting the HTML syntax errors in index.html, the issue was resolved.
Even if you miss a semicolon you will encounter the same error.
# Missing semicolon
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock
# With semicolon
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
In my case, I had stored the nginx.conf configuration file on GitHub. I had fetched the raw version with wget, which resulted in this error.
Later, I cloned my repository, used the nginx.conf file from the clone, and the issue was resolved.
I never really figured out how that happened.
I did a "docker run" of nginx 1.14 (the default apt package for debian:bullseye) to extract its default nginx.conf, with the intention of updating it and then using it in my Dockerfile.
Anyhow, after reading the comments in this thread, I found that the file was "UTF-16LE" ... I'm not really an expert, but that is not "UTF-8".
Solved as follows.
The issue as seen from inside the container:
me#docker-nginx $ head nginx.conf
��
user www-data:qgis;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /dev/stdout warn;
events {
worker_connections 768;
me#docker-nginx $ dos2unix nginx.conf
dos2unix: converting UTF-16LE file nginx.conf to UTF-8 Unix format...
Also solved in the working directory:
In IntelliJ IDEA: select "UTF-8" as the file encoding and then choose "Convert".
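If you want to confirm the encoding before converting, the file utility is enough (the exact wording of the output varies by version):
file nginx.conf
# e.g. "nginx.conf: Little-endian UTF-16 Unicode text"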
I had the same problem. When I looked inside my config file there was a syntax error, so this might be the problem; check this path:
nano /etc/nginx/sites-available/example.com
I have an Ubuntu server and a pretty heavily loaded website. The server:
Is dedicated to nginx and uses php-fpm (no Apache); MySQL is located on a different machine
Has 8 GB of RAM
Gets about 2000 requests per second.
Each php-fpm process consumes about 65 MB of RAM, according to the top command.
Free memory:
admin#myserver:~$ free -m
total used free shared buffers cached
Mem: 7910 7156 753 0 284 2502
-/+ buffers/cache: 4369 3540
Swap: 8099 0 8099
PROBLEM
Lately, I'm experiencing big performance problems: very long response times, many Gateway Timeouts, and in the evenings, when load gets high, 90% of users just see "Server not found" instead of the website (I cannot seem to reproduce this).
LOGS
My nginx error log is full of the following messages:
2012/07/18 20:36:48 [error] 3451#0: *241904 upstream prematurely closed connection while reading response header from upstream, client: 178.49.30.245, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "example.net", referrer: "http://example.net/articles"
I've tried switching to unix socket, but still get those errors:
2012/07/18 19:27:30 [crit] 2275#0: *12334 connect() to unix:/tmp/fastcgi.sock failed (2: No such file or directory) while connecting to upstream, client: 84.237.189.45, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://unix:/tmp/fastcgi.sock:", host: "example.net", referrer: "http://example.net/articles"
And php-fpm log is full of these:
[18-Jul-2012 19:23:34] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 75 total children
I've tried increasing the given parameters up to 100, but it still doesn't seem to be enough.
CONFIGS
Here is my current configuration
php-fpm
listen = 127.0.0.1:9001
listen.backlog = 4096
pm = dynamic
pm.max_children = 130
pm.start_servers = 40
pm.min_spare_servers = 10
pm.max_spare_servers = 40
pm.max_requests = 100
nginx
worker_processes 4;
worker_rlimit_nofile 8192;
worker_priority 0;
worker_cpu_affinity 0001 0010 0100 1000;
error_log /var/log/nginx_errors.log;
events {
    multi_accept off;
    worker_connections 4096;
}
http {
    include mime.types;
    default_type application/octet-stream;
    access_log off;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    # fastcgi parameters
    fastcgi_connect_timeout 120;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 1000;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
    client_max_body_size 128M;
    server {
        server_name example.net;
        root /var/www/example/httpdocs;
        index index.php;
        charset utf-8;
        error_log /var/www/example/nginx_error.log;
        error_page 502 504 = /gateway_timeout.html;
        # rewrite rule
        location / {
            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php?path=$1 last;
            }
        }
        location ~* \.php {
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
I would be very grateful for any advice on how to identify the problem and what parameters I can adjust to fix this. Or maybe 8GB of RAM is just not enough for this kind of load?
There are a number of issues. They are still worth fixing on such a busy site. MySQL may be the root cause for now, but longer term you need to do more work.
Caching
I see one of your error messages showing a GET request to the PHP upstream. This doesn't look good on such a high-traffic site (2000 r/s, as you mentioned). This page (/readarticle/121430) looks like a perfectly cacheable page. For one, you can use nginx to cache such pages; check out fastcgi_cache.
GET /readarticle/121430
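A minimal sketch of such a cache, with made-up zone names and paths, and assuming the page can be served identically to all visitors (cookies and logged-in users need extra care):
# in the http block: where cached responses are stored
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=pagecache:10m max_size=1g inactive=10m;
# in the PHP location block: serve repeat requests from the cache for one minute
location ~* \.php {
    fastcgi_cache pagecache;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 301 1m;
    fastcgi_cache_use_stale error timeout updating;
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}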
php-fpm
pm.max_requests = 100
The value means that a process will be killed by the php-fpm master after serving 100 requests. php-fpm uses that value to guard against third-party memory leaks. Your site is very busy, at 2000 r/s. Your maximum number of child processes is 130, and each can serve at most 100 requests. That means after 13000/2000 = 6.5 seconds all of them will be recycled. That is far too often (about 20 processes killed every second). You should start with a value of at least 1000 and increase it as long as you don't see a memory leak. Some people use 10,000 in production.
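In the pool configuration that is a one-line change (a starting point rather than a tuned value):
; recycle each worker only after 1000 requests; raise further if memory stays flat
pm.max_requests = 1000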
nginx.conf
Issue 1:
if (!-e $request_filename) {
    rewrite ^(.*)$ /index.php?path=$1 last;
}
should be replaced by more efficient try_files:
try_files $uri /index.php?path=$uri;
You save an extra if block and a regex rewrite rule match.
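The rewritten location block from the config above would simply be:
location / {
    try_files $uri /index.php?path=$uri;
}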
Issue 2: using a unix socket will save you more time than using an IP/TCP connection (around 10-20% in my experience). That's why php-fpm uses it as the default.
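If you go back to a socket, the path just has to match on both sides and the socket must be owned so that the nginx worker can connect to it. A sketch (file locations and the www-data user are assumptions; adjust for your distro):
; php-fpm pool configuration
listen = /var/run/php-fpm.sock
listen.owner = www-data
listen.group = www-data
# nginx
fastcgi_pass unix:/var/run/php-fpm.sock;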
Issue 3: You may be interested in setting up keepalive connections between nginx and php-fpm. An example is given on the official nginx site.
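A sketch of that setup (requires nginx 1.1.4 or newer; the upstream name is made up):
upstream php_backend {
    server 127.0.0.1:9001;
    # keep up to 8 idle connections to php-fpm open per worker process
    keepalive 8;
}
location ~* \.php {
    fastcgi_pass php_backend;
    # keep the FastCGI connection open instead of closing it after each request
    fastcgi_keep_conn on;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}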
I would need to see your php.ini settings, and I don't think this is related to MySQL, since it looks like you're getting socket errors. Also, is this something that begins to happen after a period of time, or does it happen immediately when the server restarts?
Try restarting the php5-fpm daemon and see what happens while tailing your error log.
Check your php.ini file and also all your fastcgi_params typically located in /etc/nginx/fastcgi_params. There are a ton of examples for what you're trying to do.
Also, do you have the APC PHP caching extension enabled?
It will look like this in your php.ini file if you're on a LAMP stack:
extension=apc.so
....
apc.enabled=0
It probably wouldn't hurt to do some MySQL connection load testing from the command line as well and see what the results are.
Setting up an nginx microcache would help as well; it will serve the same response for a few seconds.
http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php
has some good info on nginx performance.
I personally followed it and I'm quite happy.
For the sake of having an answer to this question:
You should check your MySQL server. It's probably overloaded, or it limits the number of parallel MySQL connections. You should find the bottleneck. According to your top output it doesn't look like either RAM or CPU, so it's most likely I/O. - #VBrat
Things you might want to do in the future:
1- Increase your RAM size.
2- Use a cache. See this article on how a cache can speed up your site.
3- Reduce the number of queries that are executed.
Set up the APC extension for PHP (check/configure it).
MySQL - check the configuration, indexes, and slow queries.
Install and configure Varnish. This can cache page requests and be quite useful in reducing the number of PHP requests and MySQL queries you need to make. It can be tricky with cookies/SSL, but otherwise it's not too difficult and is very worthwhile to get running.