Subdomain redirect to same backend in HAProxy - Linux

How can I configure host/path redirection of all subdomains to the same backend server?
For example:
my domain is example.com
subdomains are *.example.com
I need to redirect *.example.com/abc/ to another backend server.
My frontend ACLs are:
acl host_star hdr(host) -i *.example.com
use_backend back_live if host_star
acl is_node path_beg -i /abc/
use_backend backend_node if host_star is_node
I need abc.example.com/abc/ and xyz.example.com/abc/ to go to the same backend server.

I solved it by using hdr_end(host):
acl host_star hdr_end(host) -i .example.com
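Putting it together, a minimal sketch of the full frontend (assuming the backend names back_live and backend_node from the question; HAProxy evaluates use_backend rules in order, so the more specific path rule must come first):
frontend http-in
    bind *:80
    acl host_star hdr_end(host) -i .example.com
    acl is_node path_beg -i /abc/
    # /abc/ on any subdomain goes to backend_node; everything else under *.example.com to back_live
    use_backend backend_node if host_star is_node
    use_backend back_live if host_star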

Related

Point domain and subdomain from Route53 to DigitalOcean using https

I have two WordPress sites running on two DigitalOcean droplets.
They both have SSL certificates and redirect all requests to HTTPS.
Let's call the first address https://my-freenom-domain-1.ml
Let's call the second address https://my-freenom-domain-2.ml
I have a domain registered on route53. Let's call the domain my-domain.com.
I'm trying to map (not redirect) all requests from https://my-domain.com to https://my-freenom-domain-1.ml
and all requests from https://subdomain.my-domain.com to https://my-freenom-domain-2.ml.
How would you do this?
Update:
What I've tried (that didn't work):
Creating a simple CNAME.
CNAME for main domain (my-domain.com):
Cannot create a CNAME for main domain and gives the following error:
RRSet of type CNAME with DNS name my-domain.com. is not permitted at apex in zone my-domain.com.
CNAME for subdomain (subdomain.my-domain.com):
I am able to create a CNAME for the subdomain, but requests are redirected.
So when I go to subdomain.my-domain.com I'm redirected to https://my-freenom-domain-2.ml
Creating an S3 "redirect bucket"
I've tried creating an S3 bucket that redirects all requests for the subdomain.
So a bucket named subdomain.my-domain.com redirects all requests to https://my-freenom-domain-2.ml (HTTPS).
I then created a CNAME for subdomain.my-domain.com pointing to subdomain.my-domain.com.s3-website-eu-west-1.amazonaws.com.
But all requests are still redirected...
You need to create a virtual host for your new domain on your DigitalOcean droplets for it to work.
So I would do the following to make it work:
Create a virtual host for the new domain on the webserver of the droplets,
or add the new domain as a server name in the webserver config.
Add the SSL certificate for the new domain to the old webserver, or alternatively terminate the SSL at the load balancer.
Add a DNS CNAME or A record entry for the new domain pointing to the old domain's servers.
After this it should work.
This is based on @mdeora's correct answer, with some details.
1. Create a virtual host for the domain (my-domain.com) in the droplet
Copy default conf:
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/my-domain.com.conf
Add a ServerName to the conf:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName my-domain.com
    DocumentRoot /var/www/html
    <Directory /var/www/html/>
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Enable the site: sudo a2ensite my-domain.com.conf
Reload Apache: sudo systemctl reload apache2
2. Install an SSL certificate on the droplet
(I did it using certbot)
certbot --apache -d my-domain.com
(follow the certbot instructions)
3. Create an A record in route53
Create an A record and point it to the ip of the droplet.
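If you prefer the CLI to the console, here is a sketch using the AWS CLI (the hosted zone ID Z123EXAMPLE and droplet IP 203.0.113.10 are placeholders):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "my-domain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'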
(4. Update WordPress settings)
If you're running a WordPress site, be sure to change the WordPress URL settings in the admin panel to https://my-domain.com.
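If you have shell access, the same change can be made from the command line (a sketch, assuming WP-CLI is installed on the droplet):
wp option update home 'https://my-domain.com'
wp option update siteurl 'https://my-domain.com'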
Now, hopefully, everything should work.

How can I block all IPs but allow one server IP in .htaccess

I'm trying to deny all requests sent to a website, but allow only two IP addresses.
I've learned this should be done with .htaccess.
Basically there are three parties: the website server, the form-handling server, and my own network IP.
Let's assign the following IP addresses:
Website Server: 1.1.1.1
Form-handling Server: 2.2.2.2
Own Network: 3.3.3.3
The .htaccess is placed in the public_html directory of the form-handling server (2.2.2.2).
Now, this works:
order deny,allow
deny from all
allow from 3.3.3.3
The form-handling server is accessible with my own browser, but the form POST request sent from the website is blocked (which is good, in this case).
But when I edit the .htaccess to the following, the form POST request is still blocked:
order deny,allow
deny from all
allow from 1.1.1.1
allow from 3.3.3.3
To make sure this .htaccess is functional I tried:
order deny,allow
deny from all
allow from 1.1.1.1
Now I cannot reach the form-handling server, which proves the .htaccess is active. (Also, the website server still cannot access it.)
How can I give the website server (and preferably me as well) access to the form-handling server, while blocking every other visitor/server?
Worth knowing: when I delete these lines from my .htaccess, the connection between the website and the form-handling server works beautifully.
I am pretty sure your .htaccess is OK. Most likely your webserver connects to the form server from a different IP - i.e., the traffic between your webserver and your form server goes over an internal LAN with a different address.
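One way to confirm this (a sketch, assuming an Apache access log at the usual Debian location): watch the form server's log while the website sends its POST, note the source IP in the first field, then allow that address or its range in addition to your own:
tail -f /var/log/apache2/access.log
order deny,allow
deny from all
allow from 3.3.3.3
allow from 10.0.0.0/8
Here 10.0.0.0/8 stands in for whatever internal LAN range actually shows up in the log; Apache 2.2's allow from accepts CIDR notation.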

Squid + squidGuard not enforcing safe search on duckduckgo.com

The purpose of this project is to force safe search on major search engines.
I managed to install Squid (version 3.3) and SquidGuard, and configured Squid as a transparent proxy with SSL interception.
I managed to enforce safe search on Google, Yahoo and Bing, but I can't with DuckDuckGo, and I can't find any reasonable explanation (either on my own or on the web).
My squid.conf is:
acl localnet src 192.168.1.0/24 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl engines dstdomain .yahoo.com
acl engines dstdomain .duckduckgo.com
acl engines dstdomain .google.com
acl engines dstdomain .bing.com
cache deny all
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
log_access allow all
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 500
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3129
http_port 3128 intercept
https_port 3130 intercept ssl-bump connection-auth=off generate-host-certificates=on cert=/etc/squid/control.com.au.pem key=/etc/squid/control.com.au.pem cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:RC4-SHA:HIGH:!aNull:!MD5:!ADH
ssl_bump none localhost
ssl_bump server-first engines
#ssl_bump server-first all
ssl_bump none all
always_direct allow all
sslproxy_cert_error deny all
sslproxy_flags DONT_VERIFY_PEER
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
And the rewrite rule in SquidGuard is:
rewrite engines {
s#.*bing.com/search.*#&\&adlt=strict#i
s#.*bing.com/images.*#&\&adlt=strict#i
s#.*bing.com/videos.*#&\&adlt=strict#i
s#.*au.search.yahoo.com.*#&\&vm=r#i
s#.*duckduckgo.com.*#&\&kp=1#i
s#.*google.com.au.*#1&safe=strict#i
s#.*google.com.*#1&safe=strict#i
s#.*bing.com.*#&\&adlt=strict#i
}
I am pretty sure the squidGuard rewrite rule is fine because if I change the Squid configuration to intercept ALL SSL communication then duckduckgo.com gets enforced.
The question is: what should I enter instead of the following?
acl engines dstdomain .duckduckgo.com
Thanks in advance.
I bet the above does not work with SquidGuard after June 23, 2015
"On 23 June 2015 the Google search services will move all search results behind SSL encryption. This means that all search results will then be served using 'https', with the secure padlock shown in web browsers."
Many schools and businesses are so annoyed that they are now using:
"'SSL interception' functionality that can intercept and filter Google search results after Google implement their change. This also allows to subsequently address existing issues with other Google services like YouTube that have already moved to SSL."
You can force TRANSPARENT safe search for google (http and https) by setting:
configure
set service dns forwarding options address=/.google.com/216.239.38.120
commit
save
DONE !!!! It works.
EXTRA BONUS:
If you want to block all access to Ask, Bing, DuckDuckGo, and other domains, use:
configure
set service dns forwarding options address=/.bing.com/216.239.38.120
set service dns forwarding options address=/.ask.com/216.239.38.120
set service dns forwarding options address=/.duckduckgo.com/216.239.38.120
commit
save
This blocks bing and ask and duckduckgo domains on both http and https.
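For what it's worth, the set service dns forwarding options commands above are EdgeOS/VyOS wrappers that pass these options straight to dnsmasq, so on a plain Linux box running dnsmasq the equivalent would be (a sketch; 216.239.38.120 is Google's forcesafesearch.google.com address):
# /etc/dnsmasq.conf
address=/google.com/216.239.38.120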
This is a little over a year after the fact, but I found this thread trying to solve this exact problem myself, so here goes.
In your squid config, you have:
acl engines dstdomain .yahoo.com
acl engines dstdomain .duckduckgo.com
acl engines dstdomain .google.com
acl engines dstdomain .bing.com
But that matches any subdomain beneath duckduckgo.com (e.g. www.duckduckgo.com, search.duckduckgo.com), not duckduckgo.com itself.
When I do a DDG search, it just uses https://duckduckgo.com/$search_string - the bare domain, no subdomain.
So in short, your explicit ssl-bump acl engines is not matching DuckDuckGo because it's expecting subdomains, not the domain itself. When you change your config to "bump all", it obviously catches it, since it catches everything.
If you exchange this line:
acl engines dstdomain .duckduckgo.com
for this line:
acl engines dstdomain duckduckgo.com
it'll work.
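So the corrected ACL block would look like this (a sketch; the other engines keep their leading dot because their search pages are served from subdomains like www.google.com):
acl engines dstdomain .yahoo.com
acl engines dstdomain duckduckgo.com
acl engines dstdomain .google.com
acl engines dstdomain .bing.com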

How to rewrite a URL in HAProxy

Hi, I am trying to rewrite a URL and route it through a backend, but I'm stuck on the configuration. My initial configuration is:
acl url_tag18 path_beg /v1
use_backend cdn if url_tag18
backend cdn
reqrep ^([^\ ]*\ )/v1(.*) wp/\1
server web02 24.222.145.72:80 cookie A check
I am trying to convert the URL below
http://example.com/v1/auth_score/ghts/hjk/klk/jkjlj.js
to http://example.com/wp/example.com/v1/auth_score/ghts/hjk/klk/jkjlj.js
Please help me.
Change reqrep in your backend to something like this:
reqirep ^([^\ :]*)\ /v1/(.*) \1\ /wp/example.com/v1/\2
I have solved my question using the code below in HAProxy:
acl url_tag19 path_beg -i /v1
use_backend cdn if url_tag19
redirect prefix /wp/example.com if url_tag19
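Note for later readers: reqrep/reqirep were deprecated in HAProxy 2.0 and removed in 2.1. On modern versions the same rewrite would be done with http-request replace-path (a sketch, reusing the backend from the question):
backend cdn
    # prepend /wp/example.com to any path starting with /v1
    http-request replace-path ^/v1/(.*) /wp/example.com/v1/\1
    server web02 24.222.145.72:80 cookie A check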

Redirect all subdomains to the main domain with .htaccess

I'm trying to redirect all sub-domains to the root domain. For example, my domain is www.example.com. When someone tries to connect to notexists.example.com, I want to redirect it to www.example.com.
This is primarily a DNS problem.
If DNS is not set up first, the client's web browser won't have an IP address to visit (no DNS records exist for that subdomain), so you will have no way of reaching Apache for it to perform a server-side redirect.
What you need is a wildcard subdomain/record. This is in the form of an A record:
* 14400 IN A 1.1.1.1
You will need access to the httpd.conf file (root access); if you are using cPanel without root access, add * as a subdomain:
https://www.namecheap.com/support/knowledgebase/article.aspx/9191/29/how-do-i-create-a-wildcard-subdomain-in-cpanel
If you do have access you will need to set up a virtual host - add the following to your httpd.conf file:
#
# Your VirtualHosts section
#
NameVirtualHost 1.2.3.4
##
# this one accepts any subdomain
##
<VirtualHost 1.2.3.4>
    DocumentRoot /www/subdomain
    ServerName hostname.domain.com
    ServerAlias *.domain.com
</VirtualHost>
http://httpd.apache.org/docs/2.2/vhosts/
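Note that the vhost above only serves the same content for every subdomain; to actually redirect them to the main domain, as the question asks, add a redirect inside it (a sketch using mod_alias, with the question's example.com names; this assumes www.example.com has its own vhost defined before this one, so it isn't caught by the wildcard and redirected in a loop):
<VirtualHost 1.2.3.4>
    ServerName any.example.com
    ServerAlias *.example.com
    # send every subdomain request to the main site
    Redirect permanent / http://www.example.com/
</VirtualHost>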
