First of all, I'm sorry for my bad English, and I hope my questions get answered.
I'll ask my question now; it may be quite complicated, but I'm a novice in this topic.
I have an application built on CakePHP 2.4, running on CentOS 6.4 64-bit behind an nginx server, and it works nicely.
Now I need to integrate part of my application with Node.js, and there is a problem: Node.js needs to know about my CakePHP session (currently stored with the File cache). I spent the whole day reading and trying to get the two applications configured for each other. First, just as a test, I tried reading the session files directly, but I knew that wasn't right: it's very insecure, the data is difficult to parse, and Node.js still can't verify that the user connected to it is the same user logged into my CakePHP application.
Then, reading more about it, I saw that it's possible to match the user across the applications if I use Memcached or Redis as the cache. I installed Memcached and then Redis on CentOS; everything was OK during installation, but then I tried to set it up in my CakePHP application with this:
Core.php
$engine = 'Redis';
bootstrap.php
Cache::config('default', array('engine' => 'Redis'));
CakePHP always gives me the following error:
16:58:57 Error: [CacheException] Cache engine default is not properly configured.
Stack Trace:
0 /var/www/public_html/project/lib/Cake/Cache/Cache.php(151): Cache::_buildEngine('default')
1 /var/www/public_html/project/app/Config/bootstrap.php(28): Cache::config('default', Array)
2 /var/www/public_html/project/lib/Cake/Core/Configure.php(92): include('/var/www/public...')
3 /var/www/public_html/project/lib/Cake/bootstrap.php(177): Configure::bootstrap(true)
4 /var/www/public_html/project/app/webroot/index.php(92): include('/var/www/public...')
5 {main}
And I'm not sure whether I need to configure something about Redis in nginx (with Memcached the same thing happened in CakePHP).
On nginx I have the following config:
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 4000;
# essential for linux, optimized to serve many clients with each thread
use epoll;
# Accept as many connections as possible, after nginx gets notification about
#a new connection.
# May flood worker_connections, if that option is set too low.
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
# Caches information about open FDs, frequently accessed files.
# Changing this setting, in my environment, brought performance up from 560k req/sec, to 904k req/sec.
# I recommend using some variant of these options, though not the specific values listed below.
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name dev.project;
client_max_body_size 2m;
access_log /var/log/nginx/cakeapp.access.log;
error_log /var/log/nginx/cakeapp.error.log;
rewrite_log on;
root /var/www/public_html/project/app/webroot;
index index.php;
# Not found this on disk?
# Feed to CakePHP for further processing!
if (!-e $request_filename) {
rewrite ^/(.+)$ /index.php?url=$1 last;
break;
}
# Pass the PHP scripts to FastCGI server
# listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_intercept_errors on; # to support 404s for PHP files not found
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Deny access to .htaccess files,
# git & svn repositories, etc
location ~ /(\.ht|\.git|\.svn) {
deny all;
}
}
# Compression. Reduces the amount of data that needs to be transferred over
# the network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-7]\.";
}
Please, any tip on what I need to do to connect CakePHP sessions to Redis?
One more thing: I tested Redis from the CLI with SET and GET, and it worked fine.
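The smoke test looked roughly like this (a sketch; the prompt format varies by Redis version):
$ redis-cli
redis 127.0.0.1:6379> SET testkey "hello"
OK
redis 127.0.0.1:6379> GET testkey
"hello"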
Thanks in advance.
PS: this whole project runs inside a VirtualBox machine.
I just ran into this. There is one of two things going on. First, it looks like your configuration is incomplete; I think you need the server parameter. Try copying the cache config for Memcached and updating it for Redis. The other problem could be that Redis isn't running; this was the final issue for me. Start Redis and test again.
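For reference, a minimal sketch of what a more complete config in app/Config/bootstrap.php might look like; the host, port, and prefix values here are assumptions to adjust for your setup:
// app/Config/bootstrap.php -- sketch only, values are assumptions
Cache::config('default', array(
    'engine' => 'Redis',
    'server' => '127.0.0.1', // assumed: Redis running locally
    'port'   => 6379,        // default Redis port
    'prefix' => 'cake_',     // avoid key collisions with other apps
));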
An API service I built runs over HTTPS. One user GET request has a parameter that is too long; the browser then simply cannot access the URL, but if the parameter in the URL is shortened, it works.
Testing the server locally with Node.js shows that parameters of that length can be processed, so in the end it may be an nginx problem. But after setting client_header_buffer_size and large_client_header_buffers and restarting nginx, I got the same result.
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
...
client_body_buffer_size 600k;
client_max_body_size 600k;
client_header_buffer_size 600k;
large_client_header_buffers 4 600k;
...
}
Is there anybody who has been in my situation and can provide a solution?
In another case, I used the Safari browser to access the URL with the long parameter, and the page shows a 303 error (with the shortened URL it works normally).
From the description of client_max_body_size, you can set the allowed size of the client request body. Try setting it to a large value, say
client_max_body_size 100M;
or set it to 0 to remove the check entirely, which is not recommended:
client_max_body_size 0;
I am trying to optimize serving the static bundles generated from my React Webpack app. In the process, I noticed that for the same files, the file sizes were noticeably smaller when serving the content through an Express server than when served through nginx.
Here is my express code to serve the bundle:
app.use(express.static(project.paths.dist()));
Here is my nginx config:
server {
listen 80;
root /home/test/dist/;
index index.html index.htm app.js;
server_name www.ranodom.com;
location / {
try_files $uri /index.html;
}
error_log /var/log/nginx/test/website-error_log error;
access_log /var/log/nginx/test/website-access_log;
}
When served through express:
When served through nginx:
As visible from the above screenshots, the file sizes differ drastically. The actual file sizes present in the folder are equal to the ones being served by the nginx server.
My question is: what can be the reason for this difference? Does express.static optimize/compress the served files, or is there a catch? If there is so much difference, would it be better to serve these files via the Express server and only route to the index page via nginx?
PS. The above files are already uglified and minified using webpack.
I just realized that the compression middleware was enabled in my Express server, hence the reduction in size.
If anyone else stumbles on this post, note that you can obtain similar results with nginx by using the configs mentioned below.
gzip on;
gzip_min_length 1000;
gzip_types text/html text/css application/javascript text/javascript text/plain text/xml application/json;
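For context, the smaller sizes on the Express side came from the compression middleware; a minimal sketch of that setup (the port and static path are assumptions):
// Express serving static files with gzip via the compression middleware
var express = require('express');
var compression = require('compression'); // npm install compression
var app = express();
app.use(compression());                     // gzip responses on the fly
app.use(express.static('/home/test/dist')); // assumed bundle directory
app.listen(3000);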
I inherited a node.js project and I am very new to the platform/language.
The application I inherited is in development, so it is a work in progress. In its current state it runs off port 7576, so you access it this way: server_ip:7576.
I've been tasked with putting this "prototype" on a live server so my boss can show it to investors etc. But I have to password-protect it.
So what I did is get it running on the live server, and then I made it use an nginx vhost like this:
server {
listen 80;
auth_basic "Restricted";
auth_basic_user_file /usr/ssl/htpasswd;
access_log /etc/nginx/logs/access/wip.mydomain.com.access.log;
error_log /etc/nginx/logs/error/wip.mydomain.com.error.log;
server_name wip.mydomain.com;
location / {
proxy_pass http://127.0.0.1:7576;
root /var/app;
expires 30d;
#uncomment this if you want to name an index file:
#index index.php index.html;
access_log off;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
root /var/app/public;
}
}
This got the job done: I can now access my app by going to wip.mydomain.com,
and I can easily password-protect it via nginx.
My problem is that the app is still accessible via ip:port, and I don't know how to prevent that.
Any help is appreciated.
Thanks
In your node javascript code, you need to explicitly bind to the loopback IP:
server.listen(7576, '127.0.0.1');
(You are looking for a call to .listen(<port>) to fix; the variable may be called app or something else, though.)
Any IP address starting with 127. is a loopback address that can only be accessed within a single machine (doesn't actually use the network).
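A minimal sketch of the fix, assuming a plain http server (your app may use Express or another wrapper, but the listen call is the same):
// Bind to loopback so only local processes (like nginx) can connect;
// ip:7576 from outside will no longer reach the app.
var http = require('http');
var server = http.createServer(function (req, res) {
  res.end('hello');
});
server.listen(7576, '127.0.0.1'); // loopback only, not 0.0.0.0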
I followed this tutorial http://raspberrypihelp.net/tutorials/24-raspberry-pi-webserver to set up the nginx HTTP server on my Raspberry Pi and tried to set up a site called example.com. But when I run sudo service nginx restart, it says:
Restarting nginx: nginx: [emerg] unknown directive " " in /etc/nginx/sites-enabled/example.com:3
Here is the config in example.com:
server {
server_name example.com 192.168.1.88;
access_log /srv/www/example.com/logs/access.log;
error_log /srv/www/example.com/logs/error.log;
root /srv/www/example.com/public/;
location / {
index index.php index.html index.htm;
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
}
location /phpmyadmin {
root /usr/share/;
index index.php index.html index.htm;
location ~ ^/phpmyadmin/(.+\.php)$ {
try_files $uri =404;
root /usr/share/;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
root /usr/share/;
}
}
location /phpMyAdmin {
rewrite ^/* /phpmyadmin last;
}
}
I am just following the steps, but it won't run successfully.
I had the same problem: I had copy/pasted the config code from the web, and some dirty EOL (end of line) characters were in there.
The editor didn't show them, but nginx treated them like a directive.
I just deleted every EOL and added them again.
It sounds like you did some copy-and-paste work here. It's not uncommon to snag extra characters that are invisible at the end of a line (EOL). Try this:
Run your text through this tool:
http://www.textfixer.com/tools/remove-line-breaks.php
then restore any line breaks that were removed, paying particular attention to lines that end in comments.
This worked for me. Hope it works for you.
It looks like the nginx binary was compiled with the --without-http_fastcgi_module option. This is not the default. Try downloading or compiling a different binary.
Try running
nginx -V
(with an uppercase V) to see which options were used to compile nginx.
I edited some text in the middle of the conf file, and nginx started showing this error at the start of the file itself. I copied the contents of the file, created a new file, pasted the contents there, and nginx stopped showing this error.
I faced a similar issue, with the error message "unknown directive 'index.html'" when running sudo nginx -t. After correcting the HTML syntax errors in index.html, the issue was resolved.
Even if you miss a semicolon you will encounter the same error.
# Missing semicolon
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock
# With semicolon
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
In my case, I had stored the nginx.conf configuration file on GitHub. I used wget on the raw version of the file, which resulted in this error.
Later, I cloned my repository and used the nginx.conf file from the clone, and the issue was resolved.
I never really understood how that happened.
I've "docker run " my nginx 1.14 (default apt for debian:bullseye) to extract it's default nginx.conf, with intention to update it and than use it into my Dockerfile.
Anyhow, after having read comments in this thread, found this that the file is
"UTF-16LE" ... I'm not really expert, but is not "UTF-8".
solved as:
Issue seen from inside the container:
me#docker-nginx $ head nginx.conf
��
user www-data:qgis;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log /dev/stdout warn;
events {
worker_connections 768;
me#docker-nginx $ dos2unix nginx.conf
dos2unix: converting UTF-16LE file nginx.conf to UTF-8 Unix format...
Also fixed in the working directory:
In IntelliJ IDEA: select "UTF-8" in the file encoding selector, then choose "Convert".
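If you want to confirm the encoding before converting, the file utility reports it (the exact wording varies by version):
$ file nginx.conf
nginx.conf: Little-endian UTF-16 Unicode text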
I had the same problem: when I looked inside my config file, there was a syntax error, so this might be your problem too. Check the file at this path:
nano /etc/nginx/sites-available/example.com
I have an Ubuntu server and a pretty heavily loaded website. The server:
Is dedicated to nginx and uses php-fpm (no Apache); MySQL is located on a different machine
Has 8 GB of RAM
Gets about 2000 requests per second.
Each php-fpm process consumes about 65 MB of RAM, according to the top command:
Free memory:
admin#myserver:~$ free -m
total used free shared buffers cached
Mem: 7910 7156 753 0 284 2502
-/+ buffers/cache: 4369 3540
Swap: 8099 0 8099
PROBLEM
Lately, I've been experiencing big performance problems: very long response times, lots of Gateway Timeouts, and in the evenings, when load gets high, 90% of the users just see "Server not found" instead of the website (I cannot seem to reproduce this).
LOGS
My nginx error log is full of the following messages:
2012/07/18 20:36:48 [error] 3451#0: *241904 upstream prematurely closed connection while reading response header from upstream, client: 178.49.30.245, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "example.net", referrer: "http://example.net/articles"
I've tried switching to a unix socket, but I still get these errors:
2012/07/18 19:27:30 [crit] 2275#0: *12334 connect() to unix:/tmp/fastcgi.sock failed (2: No such file or directory) while connecting to upstream, client: 84.237.189.45, server: example.net, request: "GET /readarticle/121430 HTTP/1.1", upstream: "fastcgi://unix:/tmp/fastcgi.sock:", host: "example.net", referrer: "http://example.net/articles"
And the php-fpm log is full of these:
[18-Jul-2012 19:23:34] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 75 total children
I've tried increasing the given parameters up to 100, but it still doesn't seem to be enough.
CONFIGS
Here is my current configuration
php-fpm
listen = 127.0.0.1:9001
listen.backlog = 4096
pm = dynamic
pm.max_children = 130
pm.start_servers = 40
pm.min_spare_servers = 10
pm.max_spare_servers = 40
pm.max_requests = 100
nginx
worker_processes 4;
worker_rlimit_nofile 8192;
worker_priority 0;
worker_cpu_affinity 0001 0010 0100 1000;
error_log /var/log/nginx_errors.log;
events {
multi_accept off;
worker_connections 4096;
}
http {
include mime.types;
default_type application/octet-stream;
access_log off;
sendfile on;
keepalive_timeout 65;
gzip on;
# fastcgi parameters
fastcgi_connect_timeout 120;
fastcgi_send_timeout 180;
fastcgi_read_timeout 1000;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
client_max_body_size 128M;
server {
server_name example.net;
root /var/www/example/httpdocs;
index index.php;
charset utf-8;
error_log /var/www/example/nginx_error.log;
error_page 502 504 = /gateway_timeout.html;
# rewrite rule
location / {
if (!-e $request_filename) {
rewrite ^(.*)$ /index.php?path=$1 last;
}
}
location ~* \.php {
fastcgi_pass 127.0.0.1:9001;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_script_name;
include fastcgi_params;
}
}
}
I would be very grateful for any advice on how to identify the problem and what parameters I can adjust to fix this. Or maybe 8GB of RAM is just not enough for this kind of load?
There are a number of issues. They are still worth fixing with such a busy site. MySQL may be the root cause for now, but longer term you need to do more work.
Caching
I see one of your error messages showing a GET request to the PHP upstream. This doesn't look good for such a high-traffic site (2000 r/s, as you mentioned). This page (/readarticle/121430) seems perfectly cacheable. For one, you can use nginx for caching such pages. Check out the fastcgi cache:
GET /readarticle/121430
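A minimal sketch of what that could look like (the zone name, cache path, and timings are assumptions to tune for your site):
# In the http block: define a cache zone (path and sizes are assumptions).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m max_size=1g inactive=60m;
# In the PHP location block: serve cacheable pages from that zone.
location ~* \.php {
    fastcgi_cache pagecache;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 5m;  # cache successful responses for 5 minutes
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}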
php-fpm
pm.max_requests = 100
This value means that a process will be killed by the php-fpm master after serving 100 requests. php-fpm uses that value to fight third-party memory leaks. Your site is very busy, at 2000 r/s. Your max child processes is 130, and each can serve at most 100 requests, which means that after 13000/2000 = 6.5 seconds all of them will have been recycled. That is way too much (20 processes killed every second). You should start with a value of at least 1000 and increase it as long as you don't see memory leaks. Some people use 10,000 in production.
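As a starting point, something like this in the pool config (a sketch; watch memory usage over time before settling on a value):
; php-fpm pool config: recycle workers far less aggressively
pm.max_requests = 1000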
nginx.conf
Issue 1:
if (!-e $request_filename) {
rewrite ^(.*)$ /index.php?path=$1 last;
}
should be replaced by the more efficient try_files:
try_files $uri /index.php?path=$uri;
You save an extra if block inside the location and a regex rewrite rule match.
Issue 2: using a unix socket will save you more time than using an IP address (around 10-20% in my experience). That's why php-fpm uses it as the default.
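The switch is a matching pair of settings; the socket path below is an assumption, and both sides must agree on it:
; php-fpm pool config
listen = /var/run/php-fpm.sock
# nginx PHP location block
fastcgi_pass unix:/var/run/php-fpm.sock;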
Issue 3: you may be interested in setting up keepalive connections between nginx and php-fpm. An example is given on the official nginx site.
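A sketch of that setup (the upstream name and connection count are assumptions; fastcgi_keep_conn requires nginx 1.1.4 or newer):
upstream php_backend {
    server 127.0.0.1:9001;
    keepalive 8;               # keep up to 8 idle connections per worker
}
server {
    location ~* \.php {
        fastcgi_pass php_backend;
        fastcgi_keep_conn on;  # reuse connections instead of reopening
        include fastcgi_params;
    }
}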
I'd need to see your php.ini settings, and I don't think this is related to MySQL, since it looks like you're getting socket errors. Also, is this something that begins happening after a period of time, or does it happen immediately when the server restarts?
Try restarting the php5-fpm daemon and see what happens while tailing your error log.
Check your php.ini file and also all your fastcgi_params typically located in /etc/nginx/fastcgi_params. There are a ton of examples for what you're trying to do.
Also, do you have the apc php caching extension enabled?
It will look like this in your php.ini file if you're on a LAMP stack:
extension=apc.so
....
apc.enabled=0
Probably wouldn't hurt to do some mysql connection load testing from the command line as well and see what the results are.
Setting up an nginx microcache would help as well; it will serve the same response for a few seconds.
http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php
has some good info on nginx performance.
I personally followed it and I'm quite happy.
For the sake of having an answer to this question:
"You should check your MySQL server. Probably it's overloaded, or it limits the number of parallel MySQL connections. You should find the bottleneck. And according to your top screenshot it doesn't look like either RAM or CPU, so it's most likely I/O." - #VBrat
Things you might want to do in the future:
1. Increase your RAM size.
2. Use a cache; see this article on how caching can speed up your site.
3. Reduce the number of queries that are executed.
Set up the APC extension for PHP (check/configure it).
MySQL: check the configuration, indexes, and slow queries (see the sketch after this list).
Install and configure Varnish. It can cache page requests and be quite useful in reducing the number of PHP requests and MySQL queries you need to make. It can be tricky with cookies/SSL, but otherwise it's not too difficult and is very worthwhile to get running.
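For the slow-query part, a minimal sketch of the my.cnf settings that surface expensive queries (the log path and threshold are assumptions):
# my.cnf: enable the slow query log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1   # log anything slower than 1 second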