The behavior of the Apache server can be controlled from a variety of configuration files, but they boil down to two locations:
1) a .conf file in the Apache installation folder
2) a .htaccess file
Can Varnish be configured in a similar way, so that it picks up VCL definitions in real time?
Varnish takes its VCL rules from /etc/varnish/default.vcl; you can modify those rules and run sudo service varnish reload to reload them without restarting Varnish. That means you won't lose your cache by performing a reload.
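For example, a minimal sketch of a default.vcl plus the reload step; the backend address 127.0.0.1:8080 is an assumption and should match your own application server:

    # /etc/varnish/default.vcl
    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # assumed address of the application server
        .port = "8080";        # assumed port of the application server
    }

    # pick up the edited VCL without dropping the cache
    sudo service varnish reload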
I am facing an extremely rare and peculiar problem.
We use Magento 2 on many websites, which uses Varnish almost out of the box. We face problems, but they are rare and easily fixable.
Yesterday, we noticed something really strange.
The file /lib/systemd/system/varnish.service somehow reverts to its default form without us updating or changing it. When it reverts, Varnish stops working (because in its default installation Varnish is configured on port 6081, but usually everybody changes this to port 80). So the fix is really easy, but it's really frustrating. I have seen this on different versions too, both 5 and 6.
Does anybody know if Varnish is auto-updating these files somehow?
Many thanks, I am at your disposal for further explanations.
The fact that /lib/systemd/system/varnish.service is reverted shouldn't really be a problem, because you should have a copy of that file in /etc/systemd/system that contains the appropriate values.
If you perform sudo systemctl edit varnish and perform changes, a new file called /etc/systemd/system/varnish.service.d/override.conf will be created.
If you call sudo systemctl edit --full varnish, a file called /etc/systemd/system/varnish.service will be created.
It's also possible to do this manually by running sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/, but this also requires calling sudo systemctl daemon-reload.
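For instance, a minimal sketch of such an override, assuming the only change you need is the listen port; the rest of the ExecStart line is an assumption based on a typical Varnish unit and should be copied from your own varnish.service:

    # /etc/systemd/system/varnish.service.d/override.conf
    [Service]
    # the empty ExecStart= clears the value inherited from /lib/systemd/system/varnish.service
    ExecStart=
    ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m

    # apply the override
    sudo systemctl daemon-reload
    sudo systemctl restart varnish

With this in place, whatever happens to the file under /lib/systemd/system no longer affects the listen address Varnish actually uses.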
Have a look at the following tutorial, which explains the systemd configuration of Varnish for Ubuntu: https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration.
If you're using another distribution you can find the right tutorial on https://www.varnish-software.com/developers/tutorials/#installations.
I'm asking about an issue on a WordPress website that runs on Ubuntu 18.04 behind a WAF (Web Application Firewall) service.
The server had been working fine for a year. Four days ago I tried to upload a file and got an HTTP error.
upload_max_filesize and other related configuration values are set to about 2G.
First of all, I checked the VM configuration and found that the VM memory had been reduced to 4G. After increasing the memory, I checked the PHP, Nginx, and Apache configs in VestaCP, and they had not changed. Then I tried to upload again, but this time over the local network, and the file uploaded successfully!
Here are my questions:
Did I miss anything while I checked the configurations?
Is the WAF the cause of the problem?
Could reducing and then increasing the VM memory have caused the problem?
And finally, how can I fix this?
Edited
Can someone explain the Juniper strategy and why it has done this?
Double-check your php.ini settings and look for upload_max_filesize and post_max_size. Both of those values affect the maximum upload size of a form. Also, don't forget to restart your web server service after changes are made.
If that does not work, check the phpinfo() output to make sure your settings were saved and that the appropriate php.ini file is loaded. If so,
check the code for ini_set( ... ) calls. If there is one that changes the value dynamically as the script executes, change it. Or, as a blanket fix, you can use that function to set the post max sizes to what you want. However, you should still investigate what is actually going on.
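As a reference, a minimal sketch of the relevant php.ini directives; the 2G figure mirrors the value mentioned in the question, and the file path and service names are assumptions that depend on your PHP installation:

    ; e.g. /etc/php/7.2/fpm/php.ini (path is an assumption)
    upload_max_filesize = 2G
    post_max_size = 2G

    ; restart the relevant services afterwards, for example:
    ;   sudo systemctl restart php7.2-fpm nginx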
The problem was solved after bypassing the WAF.
Edited Feb 15, 2019
According to the network security administrator's investigation of the issue, after a couple of malicious requests to the server, the WAF made the right decision by blocking the rest of them. However, we are still trying to find the source of the requests. One theory is that the MSN Bot is doing this.
Guides I found on Google indicate that Apache virtual hosts should be configured through /etc/apache2/sites-available (on Debian). But I configure my server via apache.conf, without sites-available and sites-enabled, and my sites work fine.
What's the difference between these approaches, and how does it affect security, performance, or anything else?
Sorry for my English.
Basically, all the configuration snippets get merged into one configuration before parsing, as if you had written them in a single file (which is what you actually do). So it makes no difference to Apache's performance.
However, if you put the configuration for a single vhost into a separate file, you can add or remove a vhost with a simple file operation, which makes the setup easier to maintain and automate.
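As an illustration, a minimal sketch of the Debian-style per-vhost workflow; the domain and document root are placeholders:

    # /etc/apache2/sites-available/example.com.conf
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example.com
    </VirtualHost>

    # enabling the vhost just creates a symlink in sites-enabled
    sudo a2ensite example.com.conf
    # disabling it removes the symlink again
    sudo a2dissite example.com.conf
    # reload Apache after either operation
    sudo systemctl reload apache2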
I'm configuring a new Apache 2.4 webserver. I've installed and run Apache many times in the past but always tended to stick to what I learned in the early days when it came to configuration files; in particular, I'd usually just have one virtualhosts.conf file somewhere and add all virtual hosts to it, with the first one in the file automatically becoming the default host that was served if someone visited the server just by IP address. Where a server had multiple IPs, the default host would be the first one in the file configured to answer on that IP.
However, with the new box I'm trying to do things "properly", which I gather means that in /etc/apache2/vhosts.d I should create a separate config file for each virtualhosted website. Which works, and is fine so far - but there doesn't seem to be any specific mechanism to specify which one should be the default for each IP on the server, other than them being arranged alphabetically. OK, I can work around this by naming the one I want to be first 00-name.conf, so it automatically comes first in ASCII, but that seems a bit clunky - is there a formal method for instructing Apache that a specific virtualhost out of the many that may exist for a given IP is to be the default one?
I've tried Googling, and reading the Apache documentation, but not found anything that answers this specific question.
There's nothing in the Apache documentation because that default configuration is a Debian/Ubuntu invention. In that environment, using numbered config files to determine the default virtual host for a set of name-based virtual hosts is absolutely a standard thing to do.
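By way of illustration, a minimal sketch assuming the vhosts.d layout from the question; the file names and domains are placeholders. Apache treats the first <VirtualHost> loaded for a given address:port as the default for requests that match no ServerName, which is why the numeric prefix does the job:

    # /etc/apache2/vhosts.d/00-default.conf  (sorts first, so it is loaded first)
    <VirtualHost *:80>
        ServerName default.example.com      # placeholder
        DocumentRoot /var/www/default
    </VirtualHost>

    # /etc/apache2/vhosts.d/10-www.example.com.conf
    <VirtualHost *:80>
        ServerName www.example.com          # placeholder
        DocumentRoot /var/www/example
    </VirtualHost>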
I started optimizing my WordPress blog, and one of the things I need to do is configure the .htaccess file. I have a VPS running CentOS, so should I move all my rules from .htaccess to httpd.conf? Also, I use W3 Total Cache for WordPress, and this plugin inserted some rules into my .htaccess file. What should I do with that code?
Thanks, and sorry for my English.
Apache best practice is to use your system or vhost config -- if necessary with <Directory> filters -- and to disable .htaccess files if you have (root) access to your system config. The upside here is that there are fewer pitfalls in using the root config, and there is also a performance bonus. The downside is that you will need to restart Apache when you change the configuration.
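For example, a minimal sketch of moving rules out of .htaccess and into the vhost config; the file path, domain, and document root are assumptions, and the rewrite block is the standard WordPress one rather than whatever your plugin generated:

    # e.g. /etc/httpd/conf.d/example.com.conf (path is an assumption for CentOS)
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example.com

        <Directory /var/www/example.com>
            AllowOverride None            # disables .htaccess lookups for this tree
            Options FollowSymLinks        # needed for per-directory rewrites
            Require all granted

            # standard WordPress rewrite rules, moved out of .htaccess
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </Directory>
    </VirtualHost>

    # restart Apache after changing the system config
    sudo systemctl restart httpd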
People (and the Apache docs) often discuss the performance hit of using .htaccess files, but in terms of CPU load this is minimal (~1 ms per file parsed). However, the main benefit is in I/O savings, because if you do have .htaccess files enabled, Apache searches the entire path to any requested file for putative .htaccess files. OK, these will all end up in the VFS cache on a dedicated system, but these probes still involve a lot of cache I/O, which can add up to ~10 ms per request even if fully cached.
One other thing to be aware of is that the root, vhost, and directory rules are by nature single-pass, whereas .htaccess processing is a while loop that keeps selecting and retrying the deepest .htaccess file until no more rewrite changes are made. And there are subtle syntax differences in the regexps and substitution rules.
Independent of all this is the point that Gerben makes about network tuning, which I endorse 100%, whichever way you address these issues.