I built and installed squid 3.5.23 as follows:
./configure --prefix=/usr/local/squid
make all
make install
Here is the default squid.conf shipped with this version. I made minimal modifications to the file to make my setup anonymous:
forwarded_for delete
request_header_access Via deny all
request_header_access Cache-Control deny all
After I got the (remote) proxy server running, I confirmed that I could configure my (local) browser to send traffic through it. I then took it to the next step, and had my router send all traffic originating from my local network to my proxy server:
iptables -t nat -A PREROUTING -s 192.168.11.0/24 ! -d 192.168.11.0/24 -p tcp -j DNAT --to-destination 100.200.30.40:3128
However, all of my requests came back from squid with a 400 (Bad Request). Investigating further, I discovered that the requests were using relative URLs (my browser is smart enough to always use absolute URLs when it knows it is talking to a proxy server).
I know HTTP/1.1 requests are required to include a Host header, which squid can use to determine the original destination of the packets it receives. How do I configure the proxy server to use that header? I am looking for the squid 3.5 equivalent of httpd_accel_uses_host_header on
Running squid in accelerator mode fixed this:
http_port 3128 accel
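For what it's worth, the accel-mode option that explicitly tells Squid to derive the origin server from the Host header is vhost, so if plain accel alone is not enough, the closest equivalent of httpd_accel_uses_host_header on should be something like:
http_port 3128 accel vhost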
When generating an SSL certificate, paid or self-signed, you assign it a set of specific domains (wildcard or not), known as canonical names. If you use this certificate to open domains which are not on that list, Chrome gives the warning NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
I use the same certificate on Charles Proxy, which opens all URLs fine in Chrome, without warnings. Viewing it under dev tools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles changes the domains on the certificate automatically for whatever website you visit, and that passes all Chrome validations / warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...)
I am not worried about how to bypass Chrome, but about how a .cer can be modified on the fly for each request and served to the browser.
Solved!
There are several options, including mitmproxy, sslsniff and my favorite, SSLsplit.
It is prepackaged for all major distros; install it via apt-get install sslsplit or yum install sslsplit and that's all.
You only need to bundle your certificate and its key into a single PEM file, forward the port through NAT via iptables, and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
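For the bundle itself, something like this is usually enough (the file names here are only placeholders for your own CA certificate and key):
cat example-ca.key example-ca.crt > certbundle.pem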
It reissues certificates on the fly with modified subject names and can also log all traffic if you wish. It passes all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
--- Edit 1/6/2018
I also found a Node solution, which is more than I need: https://www.npmjs.com/package/node-forge
I'm trying to get WildFly working on Ubuntu in production.
I was able to make it work on its standard 8080 and 8443 ports, and managed to redirect port 80 to 8080 and port 443 to 8443 using iptables on Ubuntu.
But with this redirection in place, the page opens over HTTPS, yet the h2 protocol (HTTP/2) and gzip do not work.
If I go directly to the standard WildFly port (www.example.com:8443), gzip and h2 work perfectly.
Here are the iptables redirect commands:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
I've tried using nginx to do the redirect and the same problem happens.
I also tried configuring WildFly to use ports 80 and 443 directly, but Ubuntu does not allow it (they are privileged ports, so a non-root process cannot bind them).
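(For reference, one common way on Linux to let a non-root process bind ports below 1024 is to grant its binary the cap_net_bind_service capability; the Java path below is only an example and has to match your own JVM:
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
WildFly itself would then still need its socket bindings changed to 80 and 443.)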
Here is my firewall status:
ufw status verbose output of the server
Is there a way to run WildFly directly on ports 80 and 443, or to make the redirect work with h2 and gzip?
System:
Ubuntu : 16.04.1
Wildfly : 10.1.0.Final
Please help me solve this problem.
Thank you very much.
I just found the solution.
The problem is my Windows 10 antivirus (more specifically, BitDefender 2017).
All the tests I had done were on Windows 10; when I switched to Linux (I have dual boot), the site finally served HTTP/2.
I then saw that the issuer name on the certificate being used was: Bitdefender Personal CA.Net-Defender.
At this point I realized that my certificate, issued by Let's Encrypt, was being replaced by a BitDefender certificate.
SOLUTION: In BitDefender, open the module settings, go to the Internet module, and disable the option to verify SSL certificates. Restart your browser and you're done.
So be careful when testing a website with an antivirus running.
I have a Magento website on a Linux server (with Varnish cache); some of the product detail pages show this error:
Error 503 Backend fetch failed Guru Meditation: XID: 98757
My website IP is 52.163.xxx.xx
Please find the details below and help me fix this issue.
/etc/default/varnish
DAEMON_OPTS="-a :8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/varnish/default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
sudo service varnish restart
Stopping HTTP accelerator varnishd No /usr/sbin/varnishd found running; none killed.
[fail]
Starting HTTP accelerator varnishd [fail]
bind(): Address already in use
bind(): Address already in use
Error: Failed to open (any) accept sockets.
As I understand it, you are running Varnish and the backend web server (say nginx or Apache) on the very same Linux machine, right?
First of all, try to run this command:
sudo netstat -anp | grep LISTEN | grep 8080
and see which process is bound to port 8080, and on which IP.
The first part of your question suggests Varnish is running, just not able to connect to the backend.
But the second part tells me you are not able to start Varnish at all.
So please clarify, and perhaps attach the output of the command above.
Let's continue with the second part, i.e. Varnish not being able to start.
I guess you have a backend server running on 8080, be it nginx, Apache, whatever.
Your Varnish backend config confirms it, after all.
Check whether the web server is bound to 127.0.0.1 and not to 0.0.0.0, so that public traffic cannot connect directly to the backend web server.
If this is the case, you have to change Varnish's listening ip:port to a non-colliding combination.
You can either:
change Varnish's port to something other than 8080, say 80
change the backend web server's port to something else, if you need 8080 to be public
double-check that your backend web server is listening on localhost only, and bind Varnish to your public IP instead of 0.0.0.0 (the default, which means all of the machine's IPs)
You can do the last option by changing the main Varnish configuration to:
DAEMON_OPTS="-a 52.163.xxx.xx:8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
This scenario has one important drawback: if you ever get a new public IP, you have to change it in the main Varnish configuration too. If this is something you can encode into an automation recipe, it shouldn't be a problem. But if you manage it by hand, be sure you have a really good documentation practice or you'll be hunting ghost bugs in the future. :)
One mistake is having both Varnish and your backend server running on the same port, 8080. You have two options to solve this:
Most straightforward and simple: adjust the Varnish DAEMON_OPTS to listen on port 80.
They can still run on the same port, provided that you make Varnish and your backend server listen on different interfaces:
Varnish would normally listen on the external interface, so adjust your Varnish listen parameter to bind to that specific IP: DAEMON_OPTS="-a 52.163.xxx.xx:8080 ...
Bind your backend server (Apache, Nginx, whatever) to listen only on the loopback interface, 127.0.0.1.
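For example, if the backend is nginx, the vhost would be bound to loopback only with something like this (a sketch; the port and server_name are just placeholders matching the setup above):
server {
    listen 127.0.0.1:8080;
    server_name example.com;
    # ... rest of the backend vhost ...
}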
Your VCL is "empty"; you should be using the corresponding plugin for Magento, which will ensure that Varnish actually caches things by generating the correct VCL file for you:
Magento 1.x: Turpentine plugin
Magento 2.x: can generate the VCL for you from the admin backend of your Magento installation.
Good day,
I have a setup in which my MikroTik router routes received packets to a Squid server.
With tcpdump I can also see that the incoming traffic is actually arriving at the correct port (443) on the Squid proxy server.
As the next step I have
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.2.51:3127
(that is all I have in my iptables rules)
which routes the received 443 traffic to port 3127, my Squid SSL port.
I am getting a "page not found" error in my browser.
Now, I know that my Squid is set up correctly, because when I manually enter the proxy server address 10.0.2.51:3127 for SSL in Firefox, everything works great and all SSL pages are logged with SSL Bump.
Could someone please help me figure out why this isn't working correctly? I am quite new to proxies.
You are DNATing packets going to the proxy.
But are you SNATing the packets coming back from the proxy?
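If not, and the proxy can reach the client directly, the replies will come from the proxy's own address and the client will drop them. A typical companion rule on the router doing the DNAT is something like this (a sketch, reusing the addresses from your rule):
iptables -t nat -A POSTROUTING -p tcp -d 10.0.2.51 --dport 3127 -j MASQUERADE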
Is it possible to redirect an outgoing connection back to localhost using iptables?
For example, if a PHP script requests someonlinesite.com/bla.php, it would be redirected to 127.0.0.1/bla.php.
OS: Debian 7
The question does not really make much sense the way it is currently asked.
Most likely you are trying to redirect an HTTP request? Then you should take a closer look at your system's name resolution, since that is the step that translates the host name someonlinesite.com into an IP address; that is where you want to intervene.
You might also want to consider using a proxy as an alternative. A pure iptables-based solution is questionable, since in typical setups the local HTTP server will not respond to requests addressed to a remote IP address...
Try with:
iptables -t nat -A OUTPUT -d 0/0 -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:80
Thank you for the replies, I managed to do it with the hosts file.
/etc/hosts
127.0.0.1 domain.com
Now requests always go to localhost when the script tries to reach domain.com.
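To confirm the override is being picked up, something like this should now print the loopback address (assuming the entry above):
getent hosts domain.com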