Sending mail with Symfony (SwiftMailer & Gmail)

I'm trying to use Swiftmailer with Symfony 2.4.
Here is my parameters.yml:
# This file is auto-generated during the composer install
parameters:
    mailer_transport: gmail
    mailer_host: smtp.gmail.com
    mailer_user: jules.truong.pro@gmail.com
    mailer_password: XXXXXX
    mailer_port: 465
    locale: fr
    secret: XXXX
And this is config.yml:
# Swiftmailer Configuration
swiftmailer:
    transport: %mailer_transport%
    username:  %mailer_user%
    password:  %mailer_password%
My code is pretty basic:
$request = $this->get('request');
$dataSubject = $request->query->get('lbSubject');
$dataEmail = $request->query->get('lbEmail');
$dataMessage = $request->query->get('lbMessage');

// Get the mailer service
$mailer = $this->get('mailer');

// Create the e-mail: the mailer service uses SwiftMailer, so we create a Swift_Message instance
$message = \Swift_Message::newInstance()
    ->setSubject($dataSubject)
    ->setFrom($dataEmail)
    ->setTo('julestruonglolilol@email.com')
    ->setBody($dataMessage);

try
{
    if (!$mailer->send($message, $failures))
    {
        return new Response('Erreur' . $failures, 400);
    }
    return new Response('OK', 200);
}
catch (Exception $e)
{
    return new Response('Erreur' . $failures, 400);
}
At the end, it returns an error
Connection could not be established with host smtp.gmail.com
This is pretty frustrating, because I know my password is right.
After a few minutes, I receive an email telling me that someone tried to hack my account, etc.
Oh, and I'm running this with WAMP, so locally.
Is the problem in my code, or maybe on Google's side?
Thanks

Try adding the following to your swiftmailer configuration, as Gmail requires an encrypted (SSL) connection:
encryption: ssl
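For context, a complete swiftmailer block for Gmail usually also sets the host and the SSL port. A minimal sketch, reusing the parameters defined in parameters.yml above (host and port keys added here for illustration):
swiftmailer:
    transport:  "%mailer_transport%"
    host:       "%mailer_host%"
    username:   "%mailer_user%"
    password:   "%mailer_password%"
    port:       "%mailer_port%"
    encryption: ssl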


HTTPSConnectionPool(host='127.0.0.1', port=7545): Max retries exceeded with url: (Caused by NewConnectionError

# compile_standard is going to be the main function that we will use to compile this code.
from solcx import compile_standard, install_solc
import json
from web3 import Web3
import os
from dotenv import load_dotenv

load_dotenv()

with open("./SimpleStorage.sol", "r") as file:
    simple_storage_file = file.read()

print("Installing...")
install_solc("0.6.0")
# print(simple_storage_file)

# compile our solidity
compiled_sol = compile_standard(
    {
        "language": "Solidity",
        "sources": {"SimpleStorage.sol": {"content": simple_storage_file}},
        "settings": {
            "outputSelection": {
                "*": {
                    "*": [
                        "abi",
                        "metadata",
                        "evm.bytecode",
                        "evm.bytecode.sourceMap",
                    ]  # ABI = Application Binary Interface; the EVM (Ethereum Virtual Machine) is the core component of the Ethereum network
                }
            }
        },
    },
    solc_version="0.6.0",
)
# print(compiled_sol)

with open("compiled_code.json", "w") as file:  # "w" means we will write to it
    json.dump(
        compiled_sol, file
    )  # take our compiled_sol variable and dump it into this file,
    # keeping it in JSON syntax

# get bytecode
bytecode = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["evm"][
    "bytecode"
]["object"]

# get abi
abi = json.loads(
    compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["metadata"]
)["output"]["abi"]

# for connecting to ganache
w3 = Web3(Web3.HTTPProvider("https://127.0.0.1:7545"))
chain_id = 5777
my_address = "0x630Ee320BcE235224184A31FC687a5D183142BB9"
private_key = "0xd3cf1f678e8a78ace754cf57bd6ebcb28852e9657bb371951d72bbb5a0a3f413"
# private_key = os.getenv("PRIVATE_KEY")
# print(private_key)

# Create the contract in Python
SimpleStorage = w3.eth.contract(abi=abi, bytecode=bytecode)
# print(SimpleStorage)

# Get the latest transaction
nonce = w3.eth.getTransactionCount(my_address)  # <-- I'm getting the error on this line (screenshot: https://i.stack.imgur.com/sPikF.png)
# print(nonce)
# we can see that the number of transactions = 0 because we haven't made any

# 1. Build a transaction
# 2. Sign a transaction
# 3. Send a transaction
transaction = SimpleStorage.constructor().buildTransaction(
    {" chainId ": chain_id, " from ": my_address, " nonce ": nonce}
)
# print(transaction)

signed_txn = w3.eth.account.sign_transaction(transaction, private_key=private_key)
print(signed_txn)  # this is how we sign a transaction
tx_hash = w3.eth.send_raw_transaction(signed_txn.rawTransaction)
# One good practice when sending a transaction is to wait for some block
# confirmation to happen; this makes our code stop and wait for the
# transaction hash to go through.
tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

# working with contracts
# Contract Address
# Contract ABI
# Working with deployed Contracts
simple_storage = w3.eth.contract(address=tx_receipt.contractAddress, abi=abi)
# call     -> Simulate making the call and getting a return value
# transact -> Actually make a state change

# Initial value of the favorite number
print(simple_storage.functions.retrieve().call())

# store some value into this contract
store_transaction = simple_storage.functions.store(15).buildTransaction(
    {
        "chainId": chain_id,
        "gasPrice": w3.eth.gas_price,
        "from": my_address,
        "nonce": nonce + 1,
    }
)
signed_store_txn = w3.eth.account.sign_transaction(
    store_transaction, private_key=private_key
)
send_store_tx = w3.eth.send_raw_transaction(signed_store_txn.rawTransaction)
tx_receipt = w3.eth.wait_for_transaction_receipt(send_store_tx)
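One detail worth checking, offered as an assumption rather than a confirmed fix: a local Ganache workspace serves its RPC endpoint over plain HTTP, so pointing HTTPProvider at https://127.0.0.1:7545 would produce a connection error like the one above. A minimal connectivity check, assuming Ganache is running on its default port:
from web3 import Web3

# plain http, not https, for a local Ganache RPC endpoint
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:7545"))
print(w3.isConnected())  # expect True once the endpoint is reachable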

SPNEGO uses wrong KRBTGT principal name

I am trying to enable Kerberos authentication for our website. The idea is to have users who are logged into a Windows AD domain get automatic login (and initial account creation).
Before I tackle the Windows side of things, I wanted to get it working locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local docker container with nginx and the spnego module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and is accessible from my webserver container.
Here is my local krb.conf:
default_realm = SERVER.LOCAL

[realms]
    SERVER.LOCAL = {
        kdc_ports = 88,750
        kadmind_port = 749
        kdc = 172.17.0.2:88
        admin_server = 172.17.0.2:749
    }

[domain_realms]
    .server.local = SERVER.LOCAL
    server.local = SERVER.LOCAL
and the krb.conf on the webserver container
[libdefaults]
    default_realm = SERVER.LOCAL
    default_keytab_name = FILE:/etc/krb5.keytab
    ticket_lifetime = 24h
    kdc_timesync = 1
    ccache_type = 4
    forwardable = false
    proxiable = false

[realms]
    LOCALHOST.LOCAL = {
        kdc_ports = 88,750
        kadmind_port = 749
        kdc = 172.17.0.2:88
        admin_server = 172.17.0.2:749
    }

[domain_realms]
    .server.local = SERVER.LOCAL
    server.local = SERVER.LOCAL
Here are the principals and the keytab setup (the keytab is copied to the web container as /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL

Valid starting       Expires              Service principal
02/07/20 04:27:44    03/07/20 04:27:38    krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP;
    }
}
Finally, /etc/hosts has
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see, it is trying to use the SPN "krbtgt/LOCAL@SERVER.LOCAL", whereas kinit shows "krbtgt/SERVER.LOCAL@SERVER.LOCAL" as the service principal.
How do I get this to work?
Thanks in advance..
So it turns out that I needed
auth_gss_service_name HTTP/server.local;
Some other tips for issues encountered:
Make sure the keytab file is readable by the web server process (user www-data, or whatever user it runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test - Kerberos gives a very detailed log of what it's doing and why it fails
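Putting the fix together with the original config, the location block ends up looking roughly like this (assembled from the config above; only the auth_gss_service_name line changes):
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP/server.local;
    }
}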

StormCrawler DISCOVER and FETCH a website but nothing gets saved in docs

There is a website that I'm trying to crawl. The crawler DISCOVERs and FETCHes the URLs, but nothing ends up in docs. The website is https://cactussara.ir. Where is the problem?!
And this is the robots.txt of this website:
User-agent: *
Disallow: /
And this is my urlfilters.json:
{
  "com.digitalpebble.stormcrawler.filtering.URLFilters": [
    {
      "class": "com.digitalpebble.stormcrawler.filtering.basic.BasicURLFilter",
      "name": "BasicURLFilter",
      "params": {
        "maxPathRepetition": 8,
        "maxLength": 8192
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.filtering.depth.MaxDepthFilter",
      "name": "MaxDepthFilter",
      "params": {
        "maxDepth": -1
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.filtering.basic.BasicURLNormalizer",
      "name": "BasicURLNormalizer",
      "params": {
        "removeAnchorPart": true,
        "unmangleQueryString": true,
        "checkValidURI": true,
        "removeHashes": false
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.filtering.host.HostURLFilter",
      "name": "HostURLFilter",
      "params": {
        "ignoreOutsideHost": true,
        "ignoreOutsideDomain": false
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.filtering.regex.RegexURLNormalizer",
      "name": "RegexURLNormalizer",
      "params": {
        "regexNormalizerFile": "default-regex-normalizers.xml"
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.filtering.regex.RegexURLFilter",
      "name": "RegexURLFilter",
      "params": {
        "regexFilterFile": "default-regex-filters.txt"
      }
    }
  ]
}
And this is crawler-conf.yaml:
# Default configuration for StormCrawler
# This is used to make the default values explicit and list the most common configurations.
# Do not modify this file but instead provide a custom one with the parameter -conf
# when launching your extension of ConfigurableTopology.
config:
  fetcher.server.delay: 1.0
  # min. delay for multi-threaded queues
  fetcher.server.min.delay: 0.0
  fetcher.queue.mode: "byHost"
  fetcher.threads.per.queue: 1
  fetcher.threads.number: 10
  fetcher.max.urls.in.queues: -1
  fetcher.max.queue.size: -1
  # max. crawl-delay accepted in robots.txt (in seconds)
  fetcher.max.crawl.delay: 30
  # behavior of fetcher when the crawl-delay in the robots.txt
  # is larger than fetcher.max.crawl.delay:
  # (if false)
  # skip URLs from this queue to avoid that any overlong
  # crawl-delay throttles the crawler
  # (if true)
  # set the delay to fetcher.max.crawl.delay,
  # making fetcher more aggressive than requested
  fetcher.max.crawl.delay.force: false
  # behavior of fetcher when the crawl-delay in the robots.txt
  # is smaller (ev. less than one second) than the default delay:
  # (if true)
  # use the larger default delay (fetcher.server.delay)
  # and ignore the shorter crawl-delay in the robots.txt
  # (if false)
  # use the delay specified in the robots.txt
  fetcher.server.delay.force: false
  # time bucket to use for the metrics sent by the Fetcher
  fetcher.metrics.time.bucket.secs: 10
  # SimpleFetcherBolt: if the delay required by the politeness
  # is above this value, the tuple is sent back to the Storm queue
  # for the bolt on the _throttle_ stream.
  fetcher.max.throttle.sleep: -1
  # alternative values are "byIP" and "byDomain"
  partition.url.mode: "byHost"
  # metadata to transfer to the outlinks
  # used by Fetcher for redirections, sitemapparser, etc...
  # these are also persisted for the parent document (see below)
  # metadata.transfer:
  # - customMetadataName
  # lists the metadata to persist to storage
  # these are not transfered to the outlinks
  metadata.persist:
    - _redirTo
    - error.cause
    - error.source
    - isSitemap
    - isFeed
  metadata.track.path: true
  metadata.track.depth: true
  http.agent.name: "Anonymous Coward"
  http.agent.version: "1.0"
  http.agent.description: "built with StormCrawler ${version}"
  http.agent.url: "http://someorganization.com/"
  http.agent.email: "someone@someorganization.com"
  http.accept.language: "fa-IR,fa_IR,en-us,en-gb,en;q=0.7,*;q=0.3"
  http.accept: "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
  http.content.limit: -1
  http.store.headers: false
  http.timeout: 10000
  http.skip.robots: true
  # store partial fetches as trimmed content (some content has been fetched,
  # but reading more data from socket failed, eg. because of a network timeout)
  http.content.partial.as.trimmed: false
  # for crawling through a proxy:
  # http.proxy.host:
  # http.proxy.port:
  # okhttp only, defaults to "HTTP"
  # http.proxy.type: "SOCKS"
  # for crawling through a proxy with Basic authentication:
  # http.proxy.user:
  # http.proxy.pass:
  http.robots.403.allow: true
  # should the URLs be removed when a page is marked as noFollow
  robots.noFollow.strict: false
  # Guava caches used for the robots.txt directives
  robots.cache.spec: "maximumSize=10000,expireAfterWrite=6h"
  robots.error.cache.spec: "maximumSize=10000,expireAfterWrite=1h"
  protocols: "http,https,file"
  http.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.httpclient.HttpProtocol"
  https.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.httpclient.HttpProtocol"
  file.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.file.FileProtocol"
  # navigationfilters.config.file: "navigationfilters.json"
  # selenium.addresses: "http://localhost:9515"
  selenium.implicitlyWait: 0
  selenium.pageLoadTimeout: -1
  selenium.setScriptTimeout: 0
  selenium.instances.num: 1
  selenium.capabilities:
    takesScreenshot: false
    loadImages: false
    javascriptEnabled: true
  # illustrates the use of the variable for user agent
  # phantomjs.page.settings.userAgent: "$userAgent"
  # ChromeDriver config
  # goog:chromeOptions:
  #   args:
  #     - "--headless"
  #     - "--disable-gpu"
  #     - "--mute-audio"
  # DelegatorRemoteDriverProtocol
  selenium.delegated.protocol: "com.digitalpebble.stormcrawler.protocol.httpclient.HttpProtocol"
  # no url or parsefilters by default
  parsefilters.config.file: "parsefilters.json"
  urlfilters.config.file: "urlfilters.json"
  # JSoupParserBolt
  jsoup.treat.non.html.as.error: false
  parser.emitOutlinks: true
  parser.emitOutlinks.max.per.page: -1
  track.anchors: true
  detect.mimetype: true
  detect.charset.maxlength: 10000
  # filters URLs in sitemaps based on their modified Date (if any)
  sitemap.filter.hours.since.modified: -1
  # staggered scheduling of sitemaps
  sitemap.schedule.delay: -1
  # whether to add any sitemaps found in the robots.txt to the status stream
  # used by fetcher bolts
  sitemap.discovery: false
  # Default implementation of Scheduler
  scheduler.class: "com.digitalpebble.stormcrawler.persistence.DefaultScheduler"
  # revisit a page daily (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.default: 1440
  # revisit a page with a fetch error after 2 hours (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.fetch.error: 120
  # never revisit a page with an error (or set a value in minutes)
  fetchInterval.error: -1
  # custom fetch interval to be used when a document has the key/value in its metadata
  # and has been fetched succesfully (value in minutes)
  # fetchInterval.FETCH_ERROR.isFeed=true
  # fetchInterval.isFeed=true: 10
  # max number of successive fetch errors before changing status to ERROR
  max.fetch.errors: 3
  # Guava cache use by AbstractStatusUpdaterBolt for DISCOVERED URLs
  status.updater.use.cache: true
  status.updater.cache.spec: "maximumSize=10000,expireAfterAccess=1h"
  # Can also take "MINUTE" or "HOUR"
  status.updater.unit.round.date: "SECOND"
  # configuration for the classes extending AbstractIndexerBolt
  # indexer.md.filter: "someKey=aValue"
  indexer.url.fieldname: "url"
  indexer.text.fieldname: "content"
  indexer.text.maxlength: -1
  indexer.canonical.name: "canonical"
  indexer.md.mapping:
    - parse.title=title
    - parse.keywords=keywords
    - parse.description=description
Thanks in advance.
The pages contain
<meta name="robots" content="noindex,follow"/>
which is found by the parser and causes the indexer bolt to skip the page.
This can be confirmed in the metrics, where the Filtered count should match the number of pages fetched.
http.skip.robots does not apply to the directives set in the page itself.
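A quick way to confirm the directive from the command line (a sketch; any page on the site should show it):
curl -s https://cactussara.ir | grep -i '<meta name="robots"'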

Perl for SNMP V3 Not Working, but works with SNMP V1/2 (Redhat Linux)

I have a Perl script which registers SNMP OIDs. With SNMP V1/2c, it is able to successfully register all OIDs. However, with SNMP V3, it only partially works.
As you see below, with SNMP V3 it is able to register "$root_OID.0.0.0" successfully. However, it times out when trying to invoke the Java code for "$root_OID.0.0.1".
Does anyone know why I'm able to make a successful Java call with SNMP V1/2c, but not with SNMP V3?
Many Thanks
Here is my Perl script:
#!/usr/bin/perl
use NetSNMP::OID (':all');
use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);
use NetSNMP::agent (':all');

sub myhandler {
    my ($handler, $registration_info, $request_info, $requests) = @_;
    my $request;

    my $root_OID = ".1.3.6.1.4.1.8072.9999.9999.0";
    my $CLASSPATH = "/opt/BPL/JBoss/BPL_JBossJMX.jar:/opt/jboss-5.1/client/*";
    my $CLASSNAME = "com.XXXXX.XXXXX.XXXXX.jmx.BPLJbossJMX_For_SNMP";
    my $ENV = "localhost";
    my $PORT = "8099";
    my $LOG4JFILELOC = "/opt/BPL/JBoss/JBoss-BPL-Log4j.xml";

    for ($request = $requests; $request; $request = $request->next()) {
        my $oid = $request->getOID();
        if ($request_info->getMode() == MODE_GETNEXT) {
            if ($oid < new NetSNMP::OID("$root_OID.0.0.0")) {
                my $INPUTSTRNAME = "HeapMemoryUsageZZZZZ";
                $request->setOID("$root_OID.0.0.0");
                $request->setValue(ASN_OCTET_STR, $INPUTSTRNAME);
            } elsif ($oid < new NetSNMP::OID("$root_OID.0.0.1")) {
                my $INPUTSTRNAME = "HeapMemoryUsage";
                my $OUTPUT = `java -cp $CLASSPATH $CLASSNAME $ENV $PORT $INPUTSTRNAME $LOG4JFILELOC`;
                chomp($OUTPUT);
                $request->setOID("$root_OID.0.0.1");
                $request->setValue(ASN_INTEGER, $OUTPUT);
            }
        }
    }
}

my $rootOID = ".1.3.6.1.4.1.8072.9999.9999.0";
my $regoid = new NetSNMP::OID($rootOID);
$agent->register("BPL-JBoss", $regoid, \&myhandler);
Here is my /etc/snmp/snmpd.conf file (SNMP V1/2c disabled):
###############################################################################
# snmpd.conf:
###############################################################################
#com2sec notConfigUser default public
# groupName securityModel securityName
#group notConfigGroup v1 notConfigUser
#group notConfigGroup v2c notConfigUser
view systemview included .1.3.6.1.4.1.8072.1.3.2
view systemview included .1.3.6.1.2.1
view systemview included .1.3.6.1.2.1.25.1.1
view systemview included .1.3.6.1.4.1.2021
view systemview included .1.3.6.1.4.1.8072.9999.9999
#access notConfigGroup "" any noauth exact systemview none none
###############################################################################
syslocation Unknown (edit /etc/snmp/snmpd.conf)
syscontact Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
###############################################################################
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
###############################################################################
perl do "/home/XXXXXXX/JBoss_hello_world.pl"
rouser TEST_USERNAME priv
Here are the results of my snmpwalk when using SNMP V3:
-$snmpwalk -v 3 -l authPriv -a sha -A TEST_PASSWORD -x AES -X TEST_PASSWORD -u TEST_USERNAME localhost .1.3.6.1.4.1.8072.9999.9999
NET-SNMP-MIB::netSnmpPlaypen.0.0.0.0 = STRING: "HeapMemoryUsageZZZZZ"
Timeout: No Response from localhost

Setting up varnish on same server as webserver

Our company recently decided to start working with the Varnish HTTP accelerator. The most important reason we chose this solution is that we specialize in building web shops (Magento Enterprise), and Magento has a commercial plugin that works together with Varnish.
The Varnish configuration is already present on our testing environment, which contains 1 (software) load balancer running a Varnish instance, 2 Apache webservers and 1 storage + 1 MySQL server.
However, now the time has come to add Varnish to our development environment (a VirtualBox VM with 1 GB of RAM running Debian, which has the database, webserver and files all on the same machine).
Could anyone post a default.vcl configuration file for this setup?
Apache2 runs on port 80.
Thanks in advance,
Kenny
EDIT: I found and posted the solution below.
This link has an excellent discussion of using Varnish on big production web sites. In particular, look at the /etc/default/varnish or /etc/sysconfig/varnish DAEMON_OPTS that put the cache 'file' into memory instead of on disk:
http://www.lullabot.com/articles/varnish-multiple-web-servers-drupal
The snippet I'm talking about:
DAEMON_OPTS="-a :80,:443 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u varnish -g varnish \
-S /etc/varnish/secret \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_min=400 \
-p thread_pool_max=4000 \
-p session_linger=50 \
-p sess_workspace=262144 \
-s malloc,3G"
I found the solution after more searching. Basically, we need to make sure that Varnish is listening on port 80 and Apache on port 8080 (or anything else!).
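For the Apache side, on Debian that usually means changing the listen port; a minimal sketch, assuming a stock /etc/apache2/ports.conf and the default vhost:
# /etc/apache2/ports.conf
NameVirtualHost *:8080
Listen 8080
The <VirtualHost *:8080> line in the site definition has to be changed to the same port.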
Here is my default.vcl file (located in /etc/varnish/default.vcl):
# default backend definition. Set this to point to your content server.
backend apache1 {
    .host = "127.0.0.1";
    .port = "8080";
}

director lb round-robin {
    { .backend = apache1; }
}

# add your Magento server IP to allow purges from the backend
acl purge {
    "localhost";
    "127.0.0.1";
}

# needed for TTL handling
C{
    #include <errno.h>
    #include <limits.h>
}C

sub vcl_recv {
    set req.backend = lb;

    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE" &&
        req.request != "PURGE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    # purge request
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        purge("obj.http.X-Purge-Host ~ " req.http.X-Purge-Host " && obj.http.X-Purge-URL ~ " req.http.X-Purge-Regex " && obj.http.Content-Type ~ " req.http.X-Purge-Content-Type);
        error 200 "Purged.";
    }

    # we only deal with GET and HEAD by default
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }

    # static files are always cacheable. remove SSL flag and cookie
    if (req.url ~ "^/(media|js|skin)/.*\.(png|jpg|jpeg|gif|css|js|swf|ico)$") {
        unset req.http.Https;
        unset req.http.Cookie;
    }

    # not cacheable by default
    if (req.http.Authorization || req.http.Https) {
        return (pass);
    }

    # do not cache any page from
    # - index files
    # - ...
    if (req.url ~ "^/(index)") {
        return (pass);
    }

    # as soon as we have a NO_CACHE or admin cookie, pass the request
    if (req.http.cookie ~ "(NO_CACHE|adminhtml)=") {
        return (pass);
    }

    # normalize Accept-Encoding header
    # http://varnish.projects.linpro.no/wiki/FAQ/Compression
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
            # No point in compressing these
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }

    # remove Google gclid parameters
    set req.url = regsuball(req.url, "\?gclid=[^&]+$", "");   # strips when QS = "?gclid=AAA"
    set req.url = regsuball(req.url, "\?gclid=[^&]+&", "?");  # strips when QS = "?gclid=AAA&foo=bar"
    set req.url = regsuball(req.url, "&gclid=[^&]+", "");     # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"

    # decided to cache. remove cookie
    #unset req.http.Cookie;
    return (lookup);
}
Here's the content of the varnish file (/etc/default/varnish):
# Configuration file for varnish
#
# /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
# to be set from this shell script fragment.
#
# Should we start varnishd at boot? Set to "yes" to enable.
START=yes
# Maximum number of open files (for ulimit -n)
NFILES=131072
# Maximum locked memory size (for ulimit -l)
# Used for locking the shared memory log in memory. If you increase log size,
# you need to increase this number as well
MEMLOCK=82000
# Default varnish instance name is the local nodename. Can be overridden with
# the -n switch, to have more instances on a single server.
INSTANCE=$(uname -n)
# This file contains 4 alternatives, please use only one.
## Alternative 1, Minimal configuration, no VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080. Use a 1GB fixed-size cache file.
#
# DAEMON_OPTS="-a :6081 \
# -T localhost:6082 \
# -b localhost:8080 \
# -u varnish -g varnish \
# -S /etc/varnish/secret \
# -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request. Use a 1GB
# fixed-size cache file.
#
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
## Alternative 3, Advanced configuration
#
# See varnishd(1) for more information.
#
# # Main configuration file. You probably want to change it :)
# VARNISH_VCL_CONF=/etc/varnish/default.vcl
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
# VARNISH_LISTEN_ADDRESS=
# VARNISH_LISTEN_PORT=6081
#
# # Telnet admin interface listen address and port
# VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
# VARNISH_ADMIN_LISTEN_PORT=6082
#
# # The minimum number of worker threads to start
# VARNISH_MIN_THREADS=1
#
# # The Maximum number of worker threads to start
# VARNISH_MAX_THREADS=1000
#
# # Idle timeout for worker threads
# VARNISH_THREAD_TIMEOUT=120
#
# # Cache file location
# VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
# VARNISH_STORAGE_SIZE=1G
#
# # File containing administration secret
# VARNISH_SECRET_FILE=/etc/varnish/secret
#
# # Backend storage specification
# VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
#
# # Default TTL used when the backend does not specify one
# VARNISH_TTL=120
#
# # DAEMON_OPTS is used by the init script. If you add or remove options, make
# # sure you update this section, too.
# DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
# -f ${VARNISH_VCL_CONF} \
# -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
# -t ${VARNISH_TTL} \
# -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
# -S ${VARNISH_SECRET_FILE} \
# -s ${VARNISH_STORAGE}"
#
## Alternative 4, Do It Yourself
#
# DAEMON_OPTS=""
After that, you can monitor how Varnish serves the content (and from what source) by typing
varnishlog | grep URL
Apache can be used to terminate SSL (decrypt); see http://noosfero.org/Development/Varnish#SSL
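For a quick hit/miss overview, varnishstat (shipped with Varnish) is handy as well; for example
varnishstat -1 | grep cache_hit
shows the cumulative cache_hit counter (cache_miss works the same way).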
