Varnish seems to be doing gunzip a lot

I would like your help clarifying the n_gunzip counter on my Varnish setup.
These are my stats for one server that is running a couple of websites.
 34837        0.00        0.50 cache_hit - Cache hits
  1022        0.00        0.01 cache_hitpass - Cache hits for pass
  4672        0.00        0.07 cache_miss - Cache misses
  2175         .           .   n_expired - N expired objects
    85        0.00        0.00 n_gzip - Gzip operations
  3512        0.00        0.05 n_gunzip - Gunzip operations
The problem is that I am seeing what I think is a lot of gunzips, about 7% of all hits. I really do not believe users would be accessing my websites with browsers that do not support gzip, so I cannot understand why the gunzips are happening.
All I have related to encoding in my VCL is the following:
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # If the browser supports it, we'll use gzip.
            set req.http.Accept-Encoding = "gzip";
        }
        else if (req.http.Accept-Encoding ~ "deflate") {
            # Next, try deflate if it is supported.
            set req.http.Accept-Encoding = "deflate";
        }
        else {
            # Unknown algorithm. Remove it and send unencoded.
            unset req.http.Accept-Encoding;
        }
    }
...
Is my Varnish behaving correctly? Is this normal behavior?

Gunzip is pretty cheap, so I wouldn't worry about this for your traffic levels.
If this is a Varnish 3.0 server, you can safely remove the Accept-Encoding scrubbing in vcl_recv. This will be done behind the scenes by Varnish itself.
As for a root cause, my guess is that it is your service monitoring probe forgetting to set Accept-Encoding: gzip. Your probe URL (front page/favicon/probe.txt) is stored gzipped inside Varnish, and these repeated unencoded checks are skewing your gunzip numbers.
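If you want to test that theory, you could normalize the probe's request so the stored gzipped object is delivered as-is. A minimal sketch, assuming the probe identifies itself by User-Agent (the patterns here are placeholders for whatever monitor you actually run):
sub vcl_recv {
    # Hypothetical: a probe that sends no Accept-Encoding gets gzip,
    # so Varnish serves the stored gzipped object without gunzipping.
    if (!req.http.Accept-Encoding && req.http.User-Agent ~ "(?i)pingdom|nagios|monitor") {
        set req.http.Accept-Encoding = "gzip";
    }
}
Note the probe will then receive compressed bytes, so only do this if the check looks at the status code rather than the body.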

Kernel Build Caching/Nondeterminism

I run a CI server which I use to build a custom linux kernel. The CI server is not powerful and has a time limit of 3h per build. To work within this limit, I had the idea to cache kernel builds using ccache. My hope was that I could create a cache once every minor version release and reuse it for the patch releases e.g. I have a cache I made for 4.18 which I want to use for all 4.18.x kernels.
After removing the build timestamps, this works great for the exact kernel version I am building for. For the 4.18 kernel referenced above, building that on the CI gives the following statistics:
$ ccache -s
cache directory
primary config
secondary config (readonly)    /etc/ccache.conf
stats zero time                Thu Aug 16 14:36:22 2018
cache hit (direct)             17812
cache hit (preprocessed)          38
cache miss                         0
cache hit rate                100.00 %
called for link                    3
called for preprocessing       29039
unsupported code directive         4
no input file                   2207
cleanups performed                 0
files in cache                 53652
cache size                       1.4 GB
max cache size                   5.0 GB
A cache hit rate of 100% and an hour to complete the build: fantastic stats, and exactly as expected.
Unfortunately, when I try to build 4.18.1, I get
cache directory
primary config
secondary config (readonly)    /etc/ccache.conf
stats zero time                Thu Aug 16 10:36:22 2018
cache hit (direct)                 0
cache hit (preprocessed)         233
cache miss                     17658
cache hit rate                  1.30 %
called for link                    3
called for preprocessing       29039
unsupported code directive         4
no input file                   2207
cleanups performed                 0
files in cache                 90418
cache size                       2.4 GB
max cache size                   5.0 GB
That's a 1.30% hit rate, and the build time reflects the poor performance, all from a single patch-version change.
I would have expected the caching performance to degrade over time but not to this extent, so my only thought is that there is more non-determinism than simply the timestamp. For example, are most/all of the source files including the full kernel version string? My understanding is that something like that would break the caching completely. Is there a way to make the caching work as I'd like it to or is it impossible?
There is an include/generated/uapi/linux/version.h header (generated by the top Makefile, https://elixir.bootlin.com/linux/v4.16.18/source/Makefile)
which encodes the exact kernel version as a macro:
version_h := include/generated/uapi/linux/version.h
old_version_h := include/linux/version.h

define filechk_version.h
	(echo \#define LINUX_VERSION_CODE $(shell \
	expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))';)
endef

$(version_h): $(srctree)/Makefile FORCE
	$(call filechk,version.h)
	$(Q)rm -f $(old_version_h)
So version.h for Linux 4.16.18 will be generated like this (266258 is (4 << 16) + (16 << 8) + 18 = 0x41012):
#define LINUX_VERSION_CODE 266258
#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))
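The encoding is easy to sanity-check from a shell:
$ echo $(( (4 << 16) + (16 << 8) + 18 ))
266258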
Later, for example when building modules, there has to be a way to read the LINUX_VERSION_CODE macro value: https://www.tldp.org/LDP/lkmpg/2.4/html/lkmpg.html (4.1.6. Writing Modules for Multiple Kernel Versions):
"The way to do this is to compare the macro LINUX_VERSION_CODE to the macro KERNEL_VERSION. In version a.b.c of the kernel, the value of this macro would be 2^16 * a + 2^8 * b + c. Be aware that this macro is not defined for kernel 2.0.35 and earlier, so if you want to write modules that support really old kernels..."
How is version.h included? The sample module includes <linux/kernel.h>, <linux/module.h>, and <linux/modversions.h>, and one of these files probably includes the global version.h indirectly. Most, or even all, kernel sources end up including version.h.
While your builds still compared timestamps, version.h may have been regenerated on every build, which disabled ccache entirely. Now that timestamps are ignored, LINUX_VERSION_CODE is the same only for exactly the same kernel version, and it changes with every patchlevel.
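To gauge how widespread the effect is, you can count how many objects recorded version.h as a dependency. A rough sketch using kbuild's per-object .cmd dependency files (run in the object tree after a build):
$ find . -name '.*.cmd' | xargs grep -l 'include/generated/uapi/linux/version.h' | wc -l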
Update: check the gcc -H output of some kernel object compilation; there will be more headers with full-kernel-version macro definitions, for example include/generated/utsrelease.h (the UTS_RELEASE macro) and include/generated/autoconf.h (CONFIG_VERSION_SIGNATURE).
Or even run gcc -E preprocessing of the same kernel object compilation for two patchlevels and compare the generated text. With the simplest Linux module, I have -include ./include/linux/kconfig.h directly on the gcc command line, and it includes include/generated/autoconf.h (but this is not visible in the -H output; is that a bug or a feature of gcc?).
https://patchwork.kernel.org/patch/9326051/
... because the top Makefile forces its inclusion with:
-include $(srctree)/include/linux/kconfig.h
It actually does: https://elixir.bootlin.com/linux/v4.16.18/source/Makefile
# Use USERINCLUDE when you must reference the UAPI directories only.
USERINCLUDE := \
		-I$(srctree)/arch/$(SRCARCH)/include/uapi \
		-I$(objtree)/arch/$(SRCARCH)/include/generated/uapi \
		-I$(srctree)/include/uapi \
		-I$(objtree)/include/generated/uapi \
		-include $(srctree)/include/linux/kconfig.h

# Use LINUXINCLUDE when you must reference the include/ directory.
# Needed to be compatible with the O= option
LINUXINCLUDE := \
		-I$(srctree)/arch/$(SRCARCH)/include \
		-I$(objtree)/arch/$(SRCARCH)/include/generated \
		$(if $(KBUILD_SRC), -I$(srctree)/include) \
		-I$(objtree)/include \
		$(USERINCLUDE)
LINUXINCLUDE is exported to the environment and used in scripts/Makefile.lib to define the compiler flags (https://elixir.bootlin.com/linux/v4.16.18/source/scripts/Makefile.lib):
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)
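Putting the two suggestions together: a quick way to surface every version-dependent macro at once is to diff the preprocessed form of a single translation unit between the two trees. kbuild can emit it directly via the %.i target (kernel/fork.c and the tree paths here are just illustrative):
$ make kernel/fork.i        # run in the 4.18 tree, then again in the 4.18.1 tree
$ diff ../linux-4.18/kernel/fork.i ../linux-4.18.1/kernel/fork.i | head
# expect the LINUX_VERSION_CODE and UTS_RELEASE values among the differences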

Nginx force download specific extensions

I'm using this code inside a server block to force download of mp3 files:
location ~ /mp3folder/.+\.mp3$ {
    # Empty the types map so every match falls through to default_type.
    types { }
    default_type application/octet-stream;
}
I want to specify multiple extensions like mp4, wmv, flv... What changes should I make for this?
You can use
/mp3folder/.+\.(mp3|mp4|wmv|flv)$
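So the full location block would look like this (a sketch; keep whatever types/default_type setup you already use):
location ~ /mp3folder/.+\.(mp3|mp4|wmv|flv)$ {
    types { }
    default_type application/octet-stream;
}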

nginx: Deny all files inside directory

I have an "upload" directory where users can upload confidential files (jpg, png, pdf). Each user gets assigned a folder inside upload, ex: /001/, /002/, ..., /999/, etc.
I want these files to be accessible only through SFTP, so the url http://example.com/upload/259/image.jpg should return a 403 error message.
I tried many variations, but still the files can be accessed through the url.
location ~ /upload/\.(jpe?g|png|gif|ico)$ {
    deny all;
    return 403;
}
Any thoughts?
You still need to match that part: '/259/image'
This should work:
location ~ /upload/.*\.(jpe?g|png|gif|ico)$ {
    deny all;
    return 403;
}
If access to /upload is only via SFTP, then this is all you should need:
location ^~ /upload/ { return 403; }
By skipping the regex cycle with ^~ you'll improve performance. Your configuration will also scale with fewer problems by not using a regex location: a prefix location can go anywhere, but a regex location cannot, because the first regex match wins, which can lead to confusion down the road.
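To illustrate the ordering point, a small sketch (the second location is hypothetical):
location ^~ /upload/ { return 403; }           # prefix match, wins for /upload/...
location ~ \.(jpe?g|png)$ { expires 30d; }     # regex, never consulted for /upload/ URLs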

PDO Fried my Websites - Any tips?

I've spent the last several months upgrading my websites, including converting my database queries to PDO. I published a test query online a while ago and got an error message, so I checked with my webhost before wading into all the technical fixes I had Googled. He said the problem was simple: PDO wasn't installed on my server.
So he installed it - and all my websites crashed.
I checked back, and another tech told me there's a conflict between PDO and a line in my .htaccess files -
php_flag magic_quotes_gpc Off
So I commented out that line. That restored things to a point, but I'm now getting this message...
Warning: include(/home/geobear2/public_html/2B/dbc.php) [function.include]: failed to open stream: Permission denied in /home/symbolos/public_html/1A/ACE.php on line 67
dbc.php is simply an included file with my database connection; all my websites include it from the main site. I checked, and it's where it's supposed to be. Actually, I get a similar error with a second included page. And here's an additional error:
Warning: include() [function.include]: Failed opening '/home/geobear2/public_html/2B/dbc.php' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/symbolos/public_html/1A/ACE.php on line 67
Does anyone have any idea what's going on here? Can PDO somehow disrupt include links between websites? I'm totally confused. Thanks.
P.S. I downloaded the online file that includes the database connection file. Here's the relevant code...
$path = $_SERVER['REQUEST_URI'];
$path2 = str_replace('/', '', $path);
$Section = current(explode('/', ltrim($path, '/'), 2)); // Main line

$Sections = array('Introduction', 'Topics', 'World', 'Ecosymbols', 'Glossary', 'Reference', 'Links', 'About', 'Search');
if ( ! in_array($Section, $Sections))
{
    // die('Invalid section: ' . $Section);
}

switch (PHP_OS)
{
    case 'Linux':
        $BaseINC = '/home/geobear2/public_html';
        $BaseURL = 'http://www.geobop.org';
        break;
    case 'Darwin':
        // Just some code for my local includes...
        break;
    default:
        break;
}

include ($BaseINC."/2B/dbc.php");
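One way to narrow down a "Permission denied" include is to test what PHP can actually reach from the failing site. A minimal diagnostic sketch, run from the symbolos site, using the path from the warning:
<?php
$f = '/home/geobear2/public_html/2B/dbc.php';
var_dump(file_exists($f));         // false can also mean no search permission on a parent dir
var_dump(is_readable($f));         // include() needs this to be true
var_dump(ini_get('open_basedir')); // cross-account includes are often restricted here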

PHP deprecated warnings on Drupal pages despite turning them off in php.ini

I have PHP deprecated errors flooding log files and Drupal status pages like this:
Deprecated: Function ereg() is deprecated in mysite/includes/file.inc on line 893.
I should be able to turn off E_DEPRECATED errors in my php.ini, but it is having no effect despite being set to:
error_reporting = E_ALL & ~E_DEPRECATED
phpinfo() reports both the error_reporting master value and local value as 22527 (which is exactly E_ALL & ~E_DEPRECATED on PHP 5.3: 30719 & ~8192), so the setting is being picked up.
I did a
grep -R error_reporting
in my document root in the hope of finding any hard-coded error levels, but no luck:
./includes/common.inc: // If the # error suppression operator was used, error_reporting will have
./includes/common.inc: if (error_reporting() == 0) {
./modules/system/system.module: 'page arguments' => array('system_error_reporting_settings'),
./modules/system/system.admin.inc:function system_error_reporting_settings() {
./modules/system/system.install: $err = error_reporting(0);
./modules/system/system.install: error_reporting($err);
Nothing that I can see is suspect, except possibly the first line in system.install, but if I'm right that should turn all errors OFF.
I'm not setting error_reporting in .htaccess, but doing that does not have any effect either.
I'm hoping that there is a solution that doesn't involve hard coding error levels in common.inc (which DOES work, I've tried - but obviously undesirable).
I know the deprecated errors are a result of upgrading to PHP 5.3, but downgrading PHP is not an option (new sites that have been tested on 5.3 are going live now on the same server, and the sites where these errors occur have two months to live). I also cannot upgrade to Drupal versions that play nicely with 5.3, as unfortunately the previous owner haxxed the core modules without documenting his changes.
Version stuff:
PHP 5.3.2-1, Ubuntu 10.04, Drupal 6.13 on one site, 6.5 (!!1!) on the other, Apache 2.2
Did you try editing index.php to be:
error_reporting(E_ALL & ~E_DEPRECATED & ~E_USER_DEPRECATED);
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
I have used this in my php.ini file and it hid those deprecated errors. Hope it helps you! =)
error_reporting = E_ALL & ~E_DEPRECATED & ~E_WARNING
I don't know about disabling error reporting, but you can replace all the ereg functions with preg_match!
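For reference, a sketch of that conversion (the delimiters are the usual gotcha: preg_match patterns need them, ereg patterns did not):
// Before (ereg was deprecated in PHP 5.3 and removed in PHP 7):
if (ereg('^[0-9]+$', $input)) { /* ... */ }

// After: the same pattern, wrapped in / / delimiters for preg_match.
if (preg_match('/^[0-9]+$/', $input)) { /* ... */ }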
