We have two Apache servers serving different content via name-based virtual hosts. On one server we have a blog, a wiki, and a forum; on the other we have a helpdesk and a static page. We currently have one Squid reverse proxy on a third server in front of both.
We are looking at replacing the Squid reverse proxy with Varnish.
I have been unable to find anything that works; the combination of Varnish, Apache, and name-based virtual hosts with their own IP addresses does not work for me.
We are installing Varnish on a CentOS 6 server.
Does anyone have a configuration that works?
EDIT TO ADD:
OK, I finally figured it out. Below is the complete configuration, for posterity.
On Server1:
registration.test.co.za
oldforum.test.co.za
On Server2:
forum.test.co.za
blog.test.co.za
acl internal_net {
    "localhost";
    "192.168.1.0"/24;
}

backend server1 {
    .host = "192.168.1.101";
    .port = "80";
}

backend server2 {
    .host = "192.168.1.102";
    .port = "80";
}
# Respond to incoming requests
sub vcl_recv {
    ###### BACKENDS ######
    #
    # SERVER1
    #
    if (req.http.host ~ "registration.test.co.za$") {
        set req.backend = server1;
    } else if (req.http.host ~ "oldforum.test.co.za$") {
        set req.backend = server1;
    #
    # SERVER2
    #
    } else if (req.http.host ~ "forum.test.co.za$") {
        set req.backend = server2;
    } else if (req.http.host ~ "blog.test.co.za$") {
        set req.backend = server2;
    }

    # Allow purge only from internal users
    if (req.request == "PURGE") {
        if (!client.ip ~ internal_net) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    # Non-RFC2616 or weird requests
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pass);
    }
}
The post you mentioned that "does not work" seems perfectly fine: you just define two backends in the Varnish configuration and then select a backend based on the requested host.
If you cannot get it to work, you will need to post details of your setup and the configuration that fails in order to get further help.
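For illustration, a minimal Varnish 3 sketch of that idea (the hostnames here are placeholders; the backend addresses are the ones from the configuration above):

backend web1 { .host = "192.168.1.101"; .port = "80"; }
backend web2 { .host = "192.168.1.102"; .port = "80"; }

sub vcl_recv {
    # Route blog traffic to web2, everything else to web1.
    if (req.http.host ~ "(^|\.)blog\.example\.com$") {
        set req.backend = web2;
    } else {
        set req.backend = web1;
    }
}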
Related
I have been using CloudFront caching for several weeks now (with a 7-day TTL).
Since then, I have several pages for which the mobile version is displayed to desktop users, as if the cache were storing a mobile version for every user.
Here is the Terraform configuration for CloudFront:
resource "aws_cloudfront_cache_policy" "proxy_hubspot_cache_policy" {
name = "custom-caching-policy"
comment = "Our caching policy for the Cloudfront proxy"
default_ttl = 604800 # seven day of cache
max_ttl = 604800
min_ttl = 604800
parameters_in_cache_key_and_forwarded_to_origin {
enable_accept_encoding_brotli = true
enable_accept_encoding_gzip = true
cookies_config {
cookie_behavior = "none"
}
headers_config {
header_behavior = "none"
}
query_strings_config {
query_string_behavior = "all"
}
}
}
resource "aws_cloudfront_origin_request_policy" "proxy_hubspot_request_policy" {
name = "custom-request-policy-proxy"
cookies_config {
cookie_behavior = "all"
}
headers_config {
header_behavior = "allViewer"
}
query_strings_config {
query_string_behavior = "all"
}
}
resource "aws_cloudfront_distribution" "proxy_cdn" {
enabled = true
price_class = "PriceClass_100"
origin {
origin_id = local.workspace["cdn_proxy_origin_id"]
domain_name = local.workspace["cdn_domain_name"]
custom_header {
name = "X-HubSpot-Trust-Forwarded-For"
value = "true"
}
custom_header {
name = "X-HS-Public-Host"
value = local.workspace["destination_url"]
}
custom_origin_config {
origin_protocol_policy = "https-only"
http_port = "80"
https_port = "443"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
default_cache_behavior {
viewer_protocol_policy = "redirect-to-https"
allowed_methods = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.workspace["cdn_proxy_origin_id"]
cache_policy_id = aws_cloudfront_cache_policy.proxy_hubspot_cache_policy.id
origin_request_policy_id = aws_cloudfront_origin_request_policy.proxy_hubspot_request_policy.id
compress = true
}
logging_config {
include_cookies = true
bucket = data.terraform_remote_state.shared_infra.outputs.cloudfront_logs_s3_bucket_url
prefix = "proxy_${local.workspace["env_type"]}"
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate.proxy_certificate.arn
ssl_support_method = "sni-only"
}
aliases = [local.workspace["destination_url"]]
depends_on = [
aws_acm_certificate_validation.proxy_certificate_validation
]
}
resource "aws_cloudfront_monitoring_subscription" "monitor_www_proxy" {
distribution_id = aws_cloudfront_distribution.proxy_cdn.id
monitoring_subscription {
realtime_metrics_subscription_config {
realtime_metrics_subscription_status = "Enabled"
}
}
}
Any idea what could be wrong in the configuration?
Thanks a lot.
I believe the easiest way to get CloudFront to cache mobile pages separately from desktop pages is to configure the CloudFront-Is-Mobile-Viewer and CloudFront-Is-Desktop-Viewer headers as part of the cache key. Note all the device-detection headers CloudFront provides, in case you also want a separate cache for tablet viewers, or separate iOS and Android caches, etc.
The Terraform config would look like:
resource "aws_cloudfront_cache_policy" "proxy_hubspot_cache_policy" {
name = "custom-caching-policy"
comment = "Our caching policy for the Cloudfront proxy"
default_ttl = 604800 # seven day of cache
max_ttl = 604800
min_ttl = 604800
parameters_in_cache_key_and_forwarded_to_origin {
enable_accept_encoding_brotli = true
enable_accept_encoding_gzip = true
cookies_config {
cookie_behavior = "none"
}
headers_config {
header_behavior = "whitelist"
headers {
items = ["CloudFront-Is-Mobile-Viewer", "CloudFront-Is-Desktop-Viewer"]
}
}
query_strings_config {
query_string_behavior = "all"
}
}
}
Note that these headers will also be passed to your backend origin once you implement this configuration, so you could change the logic of your application to render mobile vs. desktop based on the value of those headers instead of inspecting the User-Agent header.
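For example, a hypothetical Flask view (the route and template names are illustrative, not part of the original setup) could branch on that header like this:

# A minimal sketch, assuming a Flask application sitting behind the CloudFront proxy.
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def index():
    # CloudFront sets this header to the string "true" or "false"
    # once it is included in the cache/origin request policies.
    if request.headers.get("CloudFront-Is-Mobile-Viewer") == "true":
        return render_template("index_mobile.html")
    return render_template("index_desktop.html")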
Terraform version: 11.11
I am working on creating a custom Config rule resource using the code below;
however, compliance_resource_types is getting set to
["AWS::EC2::Instance"] instead of ["AWS::EC2::SecurityGroup"].
I would appreciate it if someone could guide me on how to proceed.
`resource "aws_config_config_rule" "remove_sg_open_to_world" {
name = "security_group_not_open_to_world"
description = "Rule to remove SG ports if open to public"
source {
owner = "CUSTOM_LAMBDA"
source_identifier = "arn:aws:lambda:${var.current_region}:xxxxxxxxx:function:remove_sg_open_to_world"
source_detail {
message_type = "ConfigurationItemChangeNotification"
}
}
scope {
compliance_resource_types = ["AWS::EC2::SecurityGroup"]
}
depends_on = ["aws_config_configuration_recorder.config"]
I have a server running Plex and two other services that I want to monitor with Icinga2, and for the life of me I can't figure out how to get that to work. I can run the following command:
./check_procs -c 1:1 -a '/usr/lib/plexmediaserver/Plex Media Server'
which returns the following when I manually kill Plex:
PROCS CRITICAL: 0 processes with args '/usr/lib/plexmediaserver/Plex Media Server' | procs=0;;1:1;0;
I just can't figure out how to add this check to the server. Where do I put it?
I tried adding another declaration to /etc/icinga2/conf.d/services.conf as follows:
apply Service "procs"
{
import "generic-service"
check_command = "procs"
assign where host.name == NodeName
arguments =
{
"-a" =
{
value = "/usr/lib/plexmediaserver/Plex Media Server"
description = "service name"
required = true
}
}
}
But then the agent wouldn't start at all.
I solved this by defining a service:
apply Service for (service => config in host.vars.processes_linux) {
    import "generic-service"
    check_command = "nrpe"
    display_name = config.display_name
    vars.nrpe_command = "check_process"
    vars.nrpe_arguments = [ config.process, config.warn_range, config.crit_range ]
}
In the host definition I then just add a config, let's say for mongodb:
vars.processes_linux["trench-srv-lin-process-mongodb"] = {
    display_name = "MongoDB processes"
    process = "mongod"
    warn_range = "1:"
    crit_range = "1:"
}
On the remote host I need to install the nagios-nrpe-server package,
and in the config file /etc/nagios/nrpe_local.cfg I add this line:
command[check_process]=/usr/lib/nagios/plugins/check_procs -a $ARG1$ -w $ARG2$ -c $ARG3$
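To verify the NRPE side from the Icinga host, a quick manual test along these lines should work (the host address is an example, the check_nrpe path may differ, and passing arguments requires dont_blame_nrpe=1 in the remote nrpe.cfg):

/usr/lib/nagios/plugins/check_nrpe -H 192.168.1.50 -c check_process -a mongod 1: 1: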
I am running a small cluster of Raspberry Pis which I am monitoring with Icinga2. On the master node of my cluster I have a DHCP server running. I check its status the following way.
First I downloaded the check service status plugin from the Icinga Exchange, made it executable and moved it to /usr/lib/nagios/plugins (your path may differ).
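For example (assuming the downloaded script is named check_service.sh, which is what the check command below expects):

chmod +x check_service.sh
sudo mv check_service.sh /usr/lib/nagios/plugins/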
Then I defined a check command for it:
object CheckCommand "Check Service" {
    import "plugin-check-command"
    command = [ PluginDir + "/check_service.sh" ]
    arguments += {
        "-o" = {
            required = true
            value = "$check_service_os$"
        }
        "-s" = {
            required = true
            value = "$check_service_name$"
        }
    }
}
Now all that was left was defining a Service:
object Service "Check DHCP" {
host_name = "Localhost"
check_command = "Check Service"
enable_perfdata = true
event_command = "Restart DHCP"
vars.check_service_name = "isc-dhcp-server"
vars.check_service_os = "linux"
}
As a bonus, you can even define an event command that restarts your service:
object EventCommand "Restart DHCP" {
    import "plugin-event-command"
    command = [ "/usr/bin/sudo", "systemctl", "restart" ]
    arguments += {
        "(no key)" = {
            skip_key = true
            value = "$check_service_name$"
        }
    }
    vars.check_service_name = "isc-dhcp-server"
}
But for this to work, you have to give your nagios user (or whatever user runs your icinga service) sudo privileges to restart services. Add this line to your sudoers file:
nagios ALL = (ALL) NOPASSWD: /bin/systemctl restart *
I hope this helps you with your problem :-)
Jan
I am using the following Puppet defined type to disable IPv6 in Windows:
# IPv6 Management
define winconfig::ipv6 (
  $ensure,
  $state = UNDEF,
) {
  include winconfig::params

  case $ensure {
    'present', 'enabled': {
      case $state {
        UNDEF, 'all': { $ipv6_data = '0' }
        'preferred':  { $ipv6_data = '0x20' }
        'nontunnel':  { $ipv6_data = '0x10' }
        'tunnel':     { $ipv6_data = '0x01' }
        default:      { $ipv6_data = '0' }
      }
    }
    'absent', 'disabled': { $ipv6_data = '0xffffffff' }
    default: { fail('You must specify ensure status...') }
  }

  registry::value { 'ipv6':
    key   => 'hklm\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters',
    value => 'DisabledComponents',
    type  => 'dword',
    data  => $ipv6_data,
  }

  reboot { 'ipv6':
    subscribe => Registry::Value['ipv6'],
  }
}
In site.pp on the master I am using the following to call it from a node:
node 'BMSGITSS1' {
  # Disable IPV6
  winconfig::ipv6 {
    ensure => 'disabled',
  }
}
I get the following error when running puppet agent -t:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: All resource specifications require names; expected '%s' at /etc/puppetlabs/puppet/manifests/site.pp:55 on node bmsgitss1
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
The hint is in the error:
All resource specifications require names; expected '%s'
You need to give it a name:
winconfig::ipv6{"Disable IPv6":
ensure => 'disabled',
}
I am using PowerShell 1.0 under Windows Server 2003 with IIS 6.
I have about 200 sites for which I would like to change the IP address (as listed in the website properties, on the "Web Site" tab, in the "Web site identification" section, "IP address" field).
I found this code:
$site = [adsi]"IIS://localhost/w3svc/$siteid"
$site.ServerBindings.Insert($site.ServerBindings.Count, ":80:$hostheader")
$site.SetInfo()
How can I do something like this, but:
Loop through all the sites in IIS
Not insert a host header value, but change an existing one?
The following PowerShell script should help:
$oldIp = "172.16.3.214"
$newIp = "172.16.3.215"
# Get all objects at IIS://Localhost/W3SVC
$iisObjects = new-object `
System.DirectoryServices.DirectoryEntry("IIS://Localhost/W3SVC")
foreach($site in $iisObjects.psbase.Children)
{
# Is object a website?
if($site.psbase.SchemaClassName -eq "IIsWebServer")
{
$siteID = $site.psbase.Name
# Grab bindings and cast to array
$bindings = [array]$site.psbase.Properties["ServerBindings"].Value
$hasChanged = $false
$c = 0
foreach($binding in $bindings)
{
# Only change if IP address is one we're interested in
if($binding.IndexOf($oldIp) -gt -1)
{
$newBinding = $binding.Replace($oldIp, $newIp)
Write-Output "$siteID: $binding -> $newBinding"
$bindings[$c] = $newBinding
$hasChanged = $true
}
$c++
}
if($hasChanged)
{
# Only update if something changed
$site.psbase.Properties["ServerBindings"].Value = $bindings
# Comment out this line to simulate updates.
$site.psbase.CommitChanges()
Write-Output "Committed change for $siteID"
Write-Output "========================="
}
}
}