nginx access.log long comma separated GET requests - security

I am not sure yet, but I am seeing a lot of what look like attacks via the PagePeeker API (based on the source IP addresses). Granted, it could be a problem with my nginx, HAProxy load balancer, or php-fpm configuration.
Has anyone seen something similar? I have removed my domain name and replaced it with an example domain in the log excerpt below.
144.76.235.110 - - [06/Jul/2014:01:20:15 +0100] "GET /wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/ HTTP/1.1" 301 5 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4"
This is one of the virtual host files in question. I have not seen a recurrence of the errors yet.
NOTE: My SSL termination is done by HAPROXY.
server {
listen 127.0.0.1:8080;
server_name ftmon.org;
port_in_redirect off;
return 301 $real_scheme://www.ftmon.org$request_uri;
}
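Since SSL termination happens at HAProxy, nginx never sees https directly, so $real_scheme must be a custom variable. A minimal sketch of how such a variable could be derived, assuming HAProxy sets the X-Forwarded-Proto header (the map name and header are assumptions, not from the config above):

```nginx
# Hypothetical (http context): recover the client-facing scheme from the
# X-Forwarded-Proto header that the load balancer is assumed to set.
map $http_x_forwarded_proto $real_scheme {
    default $scheme;   # fall back to nginx's own scheme
    https   https;
}
```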
server {
listen 127.0.0.1:8080;
port_in_redirect off;
server_name www.ftmon.org;
root /home/wpmsite/domains/ftmon.org/public_html;
access_log /home/wpmsite/domains/ftmon.org/logs/access.log main;
error_log /home/wpmsite/domains/ftmon.org/logs/error.log error;
index index.php index.html index.htm;
autoindex off;
pagespeed on;
# Allow pagespeed to bypass the load balancer (HAPROXY)
pagespeed MapOriginDomain http://127.0.0.1:8080 https://www.ftmon.org;
pagespeed MapOriginDomain http://127.0.0.1:8080 http://www.ftmon.org;
# Allow pagespeed to bypass nginx
# pagespeed LoadFromFile http://www.ftmon.org /home/wpmsite/domains/ftmon.org/public_html;
# pagespeed LoadFromFile https://www.ftmon.org /home/wpmsite/domains/ftmon.org/public_html;
include /etc/nginx/common/pagespeed.conf;
# Rewrites for my reset site move to fresh multisite install.
rewrite ^(/files/)(.*)$ /wp-content/uploads/$2 permanent;
rewrite ^(/wp-content/blogs.dir/22/files/)(.*)$ /wp-content/uploads/$2 permanent;
location / {
try_files $uri $uri/ #wpmulti #memcached /index.php$is_args$args;
}
include /etc/nginx/common/locations.conf;
include /etc/nginx/common/wpcommon.conf;
#
# caching solutions
#
# include /etc/nginx/common/wordfence.conf;
include /etc/nginx/common/wpffpc.conf;
}

I solved my problem by disabling Pagespeed.
pagespeed off;
Also, in WordPress I added the following to wp-config.php:
define('ADMIN_COOKIE_PATH', '/');
define('COOKIE_DOMAIN', '');
define('COOKIEPATH', '');
define('SITECOOKIEPATH', '');
I might wait for the nginx pagespeed code base to stabilize further before revisiting it with WordPress multisite and a network plugin such as Network+.
NOTES
The redirect loop only impacted the main site in the WordPress multisite with the Network+ plugin installed; the other sites in the multisite were fine.
The problem only occurred with thumbnail services (and only with some of them), which made me initially think someone was using these services for attacks.
The website thumbnail service was caching pages (even error pages), so it was very difficult to debug and test possible fixes.
It's interesting that the redirect loops contained a single / in the http/https redirect addresses, i.e. ,%20https:/ rather than ,%20https://.

Related

Nginx default page and root webserver directive

I have a small embedded Linux device running Nginx. I can connect to it over the network and access the endpoints on a PC in Chrome or Firefox. My default page contains an HTML tag that points to "loading.jpeg", which is on the device at /tmp/nginx/loading.jpeg. I can type in the browser: http://192.168.0.4/loading.jpeg and see my image. I can also visit the endpoint that renders html and see my image rendered properly.
Now I want to be able to visit the root page: http://192.168.0.4/ in a browser and redirect that to my default page that should render the html and show the image. The problem is that if I set a page for the default "/" location, my webserver root directive pointing to /tmp/nginx no longer works. So I get my page displayed, but the loading.jpeg image is not found. I've tried redirecting the root request to my default page, but that also breaks the webserver root.
How can I render a default webpage for Nginx, while also having my webserver root honored? Thank you.
This does not work (the webserver root is broken, though the expected default webpage is shown):
location / {
default_type text/html;
content_by_lua_file /sbin/http/serve_stream.lua;
## The streaming endpoint
location /streaming {
default_type text/html;
content_by_lua_file /sbin/http/serve_stream.lua;
}
}
Here is my current nginx.conf without a redirect:
## Setup server to handle URI requests
server {
# Setup the root
root /tmp/nginx;
## Port
listen 80; ## Default HTTP
## Android phones from Ice Cream Sandwich will try and get a response from
server_name
clients3.google.com
clients.l.google.com
connectivitycheck.android.com
apple.com
captive.apple.com;
## We want to allow POSTing URI's with filenames with extensions in them
## and nginx does not have a "NOT MATCH" location rule - so we catch all
## and then selectively disable ones we don't want and proxy pass the rest
location / {
# For Android - Captive Portal
location /generate_204 {
return 204;
}
# For iOS - CaptivePortal
if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
return 200;
}
## Raw WebSocket
location /ws {
lua_socket_log_errors off;
lua_check_client_abort on;
default_type text/html;
content_by_lua_file /sbin/http/websocket.lua;
}
## The streaming endpoint
location /streaming {
default_type text/html;
content_by_lua_file /sbin/http/serve_stream.lua;
}
## We can have file extensions in POSTing of /blahendpoints for filesystem
## control HTTP commands
location ~ "\.(txt|bin)$" {
...
}
}
}
There are a number of solutions. An exact match location block with a rewrite ... last is quite efficient:
location = / {
rewrite ^ /some.html last;
}
See this document for more.
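In context, a sketch of a whole server block, assuming the device paths from the question; because the exact-match block handles only the bare `/` request, the `root` directive still resolves `/loading.jpeg` from `/tmp/nginx`:

```nginx
server {
    listen 80;
    root /tmp/nginx;   # still serves /loading.jpeg directly

    # Only the bare "/" request is rewritten to the default page
    # ("/some.html" is a placeholder for your actual default page).
    location = / {
        rewrite ^ /some.html last;
    }
}
```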

Nginx redirect all media (.mp3, image, pdf) to MaxCDN origin pull

I want to redirect all my media files to the MaxCDN origin pull. This might be a duplicate question, but I couldn't find the answer I was looking for.
Example:
If a visitor requests something like http://domain.com/monday/day1.mp3, it needs to redirect to http://xxx.netdna.com/monday/day1.mp3.
Questions:
Will nginx allow .htaccess to do this job?
Or do I need to set up the server config? How?
Here is my config, which is very simple.
server {
listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/domain/public_html;
index index.html index.php index.htm;
# Make site accessible from http://localhost/
server_name domain.com;
return 301 http://XXX.YYYY.netdna-cdn.com$request_uri;
}
The browser reports "The page isn't redirecting properly". Please help!
Will nginx allow .htaccess to do this job?
No. There is no .htaccess analogue for nginx.
This will redirect all requests for the folder /podcast/ to xxx.netdna.com:
location /podcast/ {
return 301 http://xxx.netdna.com$request_uri;
}
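To cover all media types rather than a single folder, a regex location keyed on file extensions should also work (the extension list here is illustrative, not exhaustive):

```nginx
# Redirect common media files to the CDN hostname from the question.
location ~* \.(mp3|jpe?g|png|gif|pdf)$ {
    return 301 http://xxx.netdna.com$request_uri;
}
```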
I found the solution to my problem after a huge amount of searching on Google, but the answer was simple; I didn't realize it until it started working perfectly. All I needed to add was the following, right after location /, and it worked like a charm:
try_files $uri $uri/ /index.php;

node.js with nginx, how to remove direct ip:port access

I inherited a node.js project and I am very new to the platform/language.
The application I inherited is in development so it is a work in progress. In its current state it runs off port 7576 so you access it this way: server_ip:7576
I've been tasked with putting this "prototype" on a live server so my boss can show it to investors etc. But I have to password protect it.
So what I did was get it running on the live server, and then I put it behind an nginx vhost like this:
server {
listen 80;
auth_basic "Restricted";
auth_basic_user_file /usr/ssl/htpasswd;
access_log /etc/nginx/logs/access/wip.mydomain.com.access.log;
error_log /etc/nginx/logs/error/wip.mydomain.com.error.log;
server_name wip.mydomain.com;
location / {
proxy_pass http://127.0.0.1:7576;
root /var/app;
expires 30d;
#uncomment this is you want to name an index file:
#index index.php index.html;
access_log off;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
root /var/app/public;
}
}
This got the job done: I can now access my app by going to wip.mydomain.com, and I can easily password protect it via nginx.
My problem is the app is still accessible via the ip:port and I don't know how to prevent that.
Any help is appreciated.
Thanks
In your node javascript code, you need to explicitly bind to the loopback IP:
server.listen(7576, '127.0.0.1');
(You are looking for a call to .listen(<port>) to fix. The variable may be called app or something else though).
Any IP address starting with 127. is a loopback address that can only be accessed within a single machine (doesn't actually use the network).

Connect cakephp 2.4 with redis on centos 6.4 and nginx

First of all, I'm sorry for my bad English; I hope my questions can still be answered.
My question may be quite complicated, but I am a novice on this topic.
I have an application on CakePHP 2.4, running nicely on CentOS 6.4 64-bit with nginx.
Now I need to integrate part of my application with node.js, and there is a problem: node.js needs access to my CakePHP session (file cache). I spent the whole day reading and experimenting. As a test I first tried reading the session files directly, but I knew that wasn't right: it is very insecure, the data is difficult to parse, and node.js cannot verify that the user connected to it is the same user logged in to my CakePHP application.
Reading further, I saw that the two applications can share sessions if I use memcached or Redis. I installed memcached and then Redis on CentOS; both installed fine, but when I configured my CakePHP application like this:
Core.php
$engine = 'Redis';
bootstrap.php
Cache::config('default', array('engine' => 'Redis'));
CakePHP always gives me the following error:
16:58:57 Error: [CacheException] Cache engine default is not properly configured.
Stack Trace:
0 /var/www/public_html/project/lib/Cake/Cache/Cache.php(151): Cache::_buildEngine('default')
1 /var/www/public_html/project/app/Config/bootstrap.php(28): Cache::config('default', Array)
2 /var/www/public_html/project/lib/Cake/Core/Configure.php(92): include('/var/www/public...')
3 /var/www/public_html/project/lib/Cake/bootstrap.php(177): Configure::bootstrap(true)
4 /var/www/public_html/project/app/webroot/index.php(92): include('/var/www/public...')
5 {main}
I'm not sure whether I need to configure anything about Redis in nginx (the same thing happened in CakePHP with memcached).
My current nginx config is:
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 4000;
# essential for linux, optmized to serve many clients with each thread
use epoll;
# Accept as many connections as possible, after nginx gets notification about
#a new connection.
# May flood worker_connections, if that option is set too low.
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
# Caches information about open FDs, freqently accessed files.
# Changing this setting, in my environment, brought performance up from 560k req/sec, to 904k req/sec.
# I recommend using some varient of these options, though not the specific values listed below.
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name dev.project;
client_max_body_size 2m;
access_log /var/log/nginx/cakeapp.access.log;
error_log /var/log/nginx/cakeapp.error.log;
rewrite_log on;
root /var/www/public_html/project/app/webroot;
index index.php;
# Not found this on disk?
# Feed to CakePHP for further processing!
if (!-e $request_filename) {
rewrite ^/(.+)$ /index.php?url=$1 last;
break;
}
# Pass the PHP scripts to FastCGI server
# listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_intercept_errors on; # to support 404s for PHP files no$
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Deny access to .htaccess files,
# git & svn repositories, etc
location ~ /(\.ht|\.git|\.svn) {
deny all;
}
}
# Compression. Reduces the amount of data that needs to be transferred over
# the network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-7]\.";
}
Can anyone give me a tip on what I need to do to connect CakePHP sessions with Redis?
One more thing: I tested Redis with the CLI, and set and get worked fine.
Thanks in advance.
PS: The whole project runs in a VirtualBox machine.
I just ran into this. There is one of two things going on. First, your configuration looks incomplete: I think you need the server parameter. Try copying the cache config for memcached and updating it for Redis. The other possibility is that Redis isn't running; this was the final issue for me. Start Redis and test again.
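A sketch of what a more complete config might look like in bootstrap.php, assuming Redis is running locally on its default port; the parameter names follow CakePHP 2.x's cache engine conventions, so check them against your version's RedisEngine:

```php
<?php
// Hypothetical: point CakePHP's default cache at a local Redis instance.
Cache::config('default', array(
    'engine' => 'Redis',
    'server' => '127.0.0.1',  // where redis-server is listening
    'port'   => 6379,         // default Redis port
    'prefix' => 'cake_',      // avoid key collisions with node.js
));
```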

Nginx and php-fpm: cannot get rid of 502 and 504 errors

I have an ubuntu-server and a pretty high loaded website. Server is:
Dedicated to nginx, uses php-fpm (no apache), mysql is located on different machine
Has 8 GB of RAM
Gets about 2000 requests per second.
Each php-fpm process consumes about 65MB of RAM, according to top command:
Free memory:
admin#myserver:~$ free -m
total used free shared buffers cached
Mem: 7910 7156 753 0 284 2502
-/+ buffers/cache: 4369 3540
Swap: 8099 0 8099
PROBLEM
Lately I'm experiencing big performance problems: very long response times, many gateway timeouts, and in the evenings, when load gets high, 90% of users just see "Server not found" instead of the website (I cannot reproduce this myself).
LOGS
My nginx error log is full of the following messages:
2012/07/18 20:36:48 [error] 3451#0: *241904 upstream prematurely closed connection while reading response header from upstream, client: 178.49.30.245, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "example.net", referrer: "http://example.net/articles"
I've tried switching to unix socket, but still get those errors:
2012/07/18 19:27:30 [crit] 2275#0: *12334 connect() to unix:/tmp/fastcgi.sock failed (2: No such file or directory) while connecting to upstream, client: 84.
237.189.45, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://unix:/tmp/fastcgi.sock:", host: "example.net", referrer: "http
://example.net/articles"
And php-fpm log is full of these:
[18-Jul-2012 19:23:34] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 75 total children
I've tried increasing the given parameters up to 100, but it still seems not to be enough.
CONFIGS
Here is my current configuration
php-fpm
listen = 127.0.0.1:9001
listen.backlog = 4096
pm = dynamic
pm.max_children = 130
pm.start_servers = 40
pm.min_spare_servers = 10
pm.max_spare_servers = 40
pm.max_requests = 100
nginx
worker_processes 4;
worker_rlimit_nofile 8192;
worker_priority 0;
worker_cpu_affinity 0001 0010 0100 1000;
error_log /var/log/nginx_errors.log;
events {
multi_accept off;
worker_connections 4096;
}
http {
include mime.types;
default_type application/octet-stream;
access_log off;
sendfile on;
keepalive_timeout 65;
gzip on;
# fastcgi parameters
fastcgi_connect_timeout 120;
fastcgi_send_timeout 180;
fastcgi_read_timeout 1000;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
client_max_body_size 128M;
server {
server_name example.net;
root /var/www/example/httpdocs;
index index.php;
charset utf-8;
error_log /var/www/example/nginx_error.log;
error_page 502 504 = /gateway_timeout.html;
# rewrite rule
location / {
if (!-e $request_filename) {
rewrite ^(.*)$ /index.php?path=$1 last;
}
}
location ~* \.php {
fastcgi_pass 127.0.0.1:9001;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_script_name;
include fastcgi_params;
}
}
}
I would be very grateful for any advice on how to identify the problem and what parameters I can adjust to fix this. Or maybe 8GB of RAM is just not enough for this kind of load?
There are a number of issues here, and they are all worth fixing on such a busy site. MySQL may be the root cause for now, but longer term you need to do more work.
Caching
I see that one of your error messages shows a GET request hitting the PHP upstream:
GET /readarticle/121430
This doesn't look good for such a high-traffic site (2000 r/s, as you mentioned). This page (/readarticle/121430) looks perfectly cacheable. For one, you can use nginx to cache such pages; check out the fastcgi cache.
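A minimal fastcgi_cache sketch (the cache path, zone name, and durations are illustrative, not taken from the question's config):

```nginx
# http context: define a small on-disk cache zone
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
                   max_size=1g inactive=10m;

server {
    location ~* \.php {
        fastcgi_pass 127.0.0.1:9001;
        include fastcgi_params;

        # serve cached copies of identical requests for a short time
        fastcgi_cache microcache;
        fastcgi_cache_key $scheme$host$request_uri;
        fastcgi_cache_valid 200 10s;
    }
}
```

Even a 10-second validity collapses thousands of identical article requests into one PHP hit.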
php-fpm
pm.max_requests = 100
This value means that a process will be killed by the php-fpm master after serving 100 requests; php-fpm uses it to guard against third-party memory leaks. Your site is very busy, at 2000 r/s. Your maximum is 130 child processes, and each can serve at most 100 requests, which means that after 13000/2000 = 6.5 seconds all of them will have been recycled. That is far too much churn (20 processes killed every second). You should start with a value of at least 1000 and increase it as long as you don't see memory leaks. Some people use 10,000 in production.
nginx.conf
Issue 1:
if (!-e $request_filename) {
rewrite ^(.*)$ /index.php?path=$1 last;
}
should be replaced by more efficient try_files:
try_files $uri /index.php?path=$uri;
This saves an if block and a regex rewrite match.
Issue 2: using a unix socket instead of a TCP socket will save you some time (around 10-20% in my experience). That's why php-fpm uses it as the default.
Issue 3: You may be interested in setting up keepalive connections between nginx and php-fpm. An example is given here in nginx official site.
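Roughly, that combines an upstream block with keepalive and fastcgi_keep_conn (available since nginx 1.1.4); a sketch, with the upstream name chosen here for illustration:

```nginx
upstream php_backend {
    server 127.0.0.1:9001;
    keepalive 8;   # number of idle connections to keep open per worker
}

server {
    location ~* \.php {
        fastcgi_pass php_backend;
        fastcgi_keep_conn on;   # reuse connections to php-fpm
        include fastcgi_params;
    }
}
```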
I'd need to see your php.ini settings, and I don't think this is related to MySQL, since it looks like you're getting socket errors. Also, does this begin to happen after a period of time, or immediately when the server restarts?
Try restarting the php5-fpm daemon while tailing your error log, and see what happens.
Check your php.ini file and also all your fastcgi_params typically located in /etc/nginx/fastcgi_params. There are a ton of examples for what you're trying to do.
Also, do you have the apc php caching extension enabled?
It will look like this in your php.ini file if you're on a LAMP stack:
extension=apc.so
....
apc.enabled=0
It probably wouldn't hurt to do some MySQL connection load testing from the command line as well and see what the results are.
Setting up an nginx microcache, which serves the same response for a few seconds, would help too.
http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php has some good info on nginx performance. I followed it personally and I'm quite happy.
For the sake of having an answer to this question:
You should check your MySQL server. It's probably overloaded, or it limits the number of parallel MySQL connections. You should find the bottleneck; according to your top screenshot it doesn't look like RAM or CPU, so it's most likely I/O. - #VBrat
Things you might want to do in the future:
1- Increase your RAM size.
2- Use a cache. See this article on how a cache can speed up your site.
3- Reduce the number of queries that are executed.
Set up the APC extension for PHP (check/configure it).
MySQL: check the configuration, indexes, and slow queries.
Install and configure Varnish. It can cache page requests and is quite useful in reducing the number of PHP requests and MySQL queries you need to make. It can be tricky with cookies/SSL, but otherwise it's not too difficult and very worthwhile to get running.
