I need some alternatives to Cloudflare's proxying option for hiding the server IP of my domains.
What other service can do this? Maybe some CloudFront setting, or anything else?
Thanks!
Assuming you want free solutions, here are a few that come to mind.
DDoS-Guard - A lot of sites use this as a Cloudflare proxy alternative, but their free plan has recently disappeared from their site. It didn't have any strict/hard limits. It may come back, so I've listed it here in case the removal is temporary.
OVH SSL Gateway - Haven't heard of any limitations either; this one is open to the public for free. Currently has 2 proxy locations.
G-Core Labs - Up to 1TB bandwidth/month for free, a lot of PoPs.
YAKUCAP - Up to 1TB/month bandwidth free.
Bitmitigate - Up to 100GB+/month website traffic and protected bandwidth, free.
ArvanCloud - Up to 100 GB Traffic/mo, but country availability limited.
Namecheap's FREE CDN - up to 50GB/mo traffic.
Hostry's Global CDN - 39 PoPs, up to 10GB/mo traffic with an overusage charge.
Self-hosted/other solutions that come to mind, if you're not a fan of using provided services.
Oracle provides a 24 GB RAM ARM VPS as part of their free plan, with various limitations (including network limits). You could host things on Oracle, as traffic is DDoS-protected. It doesn't necessarily hide your IP, but you could use the VPS's IP as a proxy in front of your actual webserver.
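The last idea, using the VPS's public IP as a front for the real webserver, can be sketched in a few lines. This is a minimal illustration only; the origin address is a made-up placeholder, and in practice you'd use nginx/HAProxy rather than hand-rolled Python.

```python
# Minimal reverse-proxy sketch: run on the VPS so only its IP is public.
# ORIGIN is a hypothetical placeholder for your hidden webserver.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

ORIGIN = "http://203.0.113.10:8080"  # hypothetical hidden origin address


class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the hidden origin and relay the response,
        # so visitors only ever see the VPS's IP.
        upstream = urlopen(Request(ORIGIN + self.path,
                                   headers={"Host": self.headers.get("Host", "")}))
        body = upstream.read()
        self.send_response(upstream.status)
        for name, value in upstream.getheaders():
            # Hop-by-hop headers must not be relayed verbatim.
            if name.lower() not in ("transfer-encoding", "connection"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)


# To actually serve on the VPS (blocks forever):
#   ThreadingHTTPServer(("0.0.0.0", 80), ProxyHandler).serve_forever()
```

Note the DNS record for your domain would point at the VPS, never at the origin, which is what keeps the origin IP out of public view.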
Self-hosted DNS (not recommended, due to uptime concerns, the anycast networks that back managed DNS, etc.) - from a brief look, https://github.com/DNSCrypt/dnscrypt-proxy is one option, providing secure self-hosted DNS. It has been updated recently, which may be what people are looking for.
Related
Guys, I really want to know how to hide my IP address, and why it is necessary to hide it.
I tried using anonymox, but I want to hide it without using any third-party software.
You could try the methods listed here: https://pc4u.org/windows-10-how-to-connect-to-a-free-vpn-without-going-through-third-party-software/ if you don't want to use third-party software to hide your IP address. You need to set up a VPN on your computer to achieve that. This will tunnel your network connection.
Source: pc4u.org
The only way to accomplish this without third-party software would be to use an online proxy and configure your browser to use said proxy. This will only change your IP as it appears to sites you visit through the browser, not for other services you may be using on your computer.
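To make the "browser-only" limitation concrete, here is a small sketch of routing one client's requests through an HTTP proxy. The proxy address is a hypothetical placeholder; only traffic sent through this opener appears to come from the proxy, while everything else on the machine keeps its real IP.

```python
# Sketch: only requests routed through this opener use the proxy's IP.
import urllib.request

PROXY = "203.0.113.50:3128"  # hypothetical open proxy address

proxied = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# proxied.open("http://example.com")          # appears to come from PROXY
# urllib.request.urlopen("http://example.com") # still uses your real IP
```

This mirrors configuring a proxy in the browser: the setting applies per application, not system-wide.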
The "best" way to "hide" or change your IP is to use a VPN (which you'll need software for). You can purchase a VPN service from one of many providers. Some providers have their own apps you can use, or you can manually configure your own OpenVPN client. There are also many great scripts, AWS one-click servers, and cheap VPS providers that make it easier than ever to create your own VPN server. This might be over the "average" user's head, though...
The benefit of using a VPN is that it not only changes your IP, it also encrypts your traffic.
It should be noted that VPNs are not 100% foolproof. If not configured properly, they will expose your real IP. In addition, many VPN providers are not reputable.
The best recommendation I can make on this and every other topic on online privacy is this site here:
https://privacytools.io
This is (IMO) the best, most comprehensive source of information about protecting your privacy online. They will guide you in the right direction regarding VPNs, proxies, securing your browser, and much, much more. Check it out. Seriously...
OpenVPN has builds for all operating systems. https://openvpn.net
Like I said, though, you'll have to purchase access (or, if you're brave, find a free 'solution') from a provider and then configure OpenVPN to use your purchased credentials. This is usually about $5 a month (for the solid / no-logs / unlimited-bandwidth ones). There are many, many posts about setting up OpenVPN here on Stack Overflow.
Finally, as far as proxies go, you can again purchase access to some of the reputable ones or search for a free one - though, in my experience, the free proxies are very touch and go.
I have been thinking about moving my domain over to my website hosting provider to store the DNS records inside cPanel. I believe it would be nice to keep both the website and domain together using one service.
My question is: are there any downsides to storing your DNS in cPanel? I guess my concern would be that if my hosting provider went down, I could end up waiting for my DNS to propagate again. If my TTL were set to 24 hours, I could experience rather large downtime if I were unlucky enough.
How do other people normally reduce this risk? Should I keep a constantly low TTL on my DNS at all times? Or should my DNS be hosted separately from my website? How do other people handle DNS downtime?
I have done some research regarding the matter but I haven't seen it discussed anywhere before and would just like some insight into the matter.
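The worry above can be put in plain numbers: a resolver that cached the record just before an outage can keep serving the dead IP for up to the full TTL, so the TTL is effectively the worst-case extra downtime. A tiny sketch of that arithmetic (the function names are mine, just for illustration):

```python
# Back-of-the-envelope model of TTL-driven downtime risk.

def worst_case_stale_seconds(ttl_seconds: int) -> int:
    """Upper bound on how long a cached (now wrong) answer keeps being served."""
    return ttl_seconds


def ttl_lowering_deadline(planned_change_time: int, old_ttl: int) -> int:
    """For a *planned* change, lower the TTL at least one old-TTL in advance;
    until then, resolvers may still hold the old record with the old TTL."""
    return planned_change_time - old_ttl


print(worst_case_stale_seconds(24 * 3600))  # 86400: up to a day of stale answers
print(worst_case_stale_seconds(300))        # 300: a 5-minute TTL caps it at minutes
```

The trade-off is that a permanently low TTL increases query load on your DNS servers, which is one reason people only drop it ahead of planned changes.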
I finally found an answer to my question which was provided by my hosting company.
They run the DNS on a clustering system, which means that even if the server goes down, DNS should continue to function, so no DNS propagation would need to happen again should the hosting server go down.
I assume this would be common practice among shared hosting companies. It's definitely interesting to know.
What are the benefits of using Fastly versus simply having my own self-hosted Varnish? Are there additional benefits and features that Fastly provides that regular Varnish does not, or is it simply that Fastly is managed Varnish in the same way that CloudAMQP is hosted and managed RabbitMQ?
I just stumbled across this question; I know you asked it a while ago, but I'm going to try to answer it for you regardless.
You are correct in assuming that Fastly manages the Varnish instances for you, so you don't have to deal with managing your servers manually. It is a slightly different concept from CloudAMQP, however; CloudAMQP is a managed RabbitMQ system that lives in a specific datacenter, perhaps with Multi-AZ enabled for failover purposes.
Fastly is a full-blown content delivery network, which means they have machines running Varnish all over the world, and that can significantly improve your users' experience thanks to lower latency. For example, if an Australian user visits your website, he will retrieve the cached content via Fastly's Australian machines, whereas if he were to connect to your own Varnish instance, he'd probably have to connect to an instance in the U.S., which would introduce a lot more latency. On top of that, it doesn't only improve speed but also reliability: your single Varnish instance failing is quite likely, while Fastly's global network of thousands of machines running Varnish collapsing is very unlikely.
So to sum it up for you:
Speed
Reliability
Regards,
Rene.
I'm interested in cross-colo fail-over strategies for web applications, such that if the main site fails users seamlessly land at the fail-over site in another colo.
The application side of things looks to be mostly figured out with a master-slave database setup between the colos and services designed to recover and be able to pick up mid-stream. I'm trying to figure out the strategy for moving traffic from the main site to the fail-over site. DNS failover, even with low TTLs, seems to carry a fair bit of latency.
What strategies would you recommend for quickly moving traffic between colos, assuming the servers at the main colo are unreachable?
If you have other interesting experience / words of wisdom about cross-colo failover I'd love to hear those as well.
DNS based mechanisms are troublesome, even if you put low TTLs in your zone files.
The reason for this is that many applications (e.g. MSIE) maintain their own caches which ignore the TTL. Other software will do a single gethostbyname() or equivalent call and store the result until the program is restarted.
Worse still, many ISPs' recursive DNS servers are known to ignore TTLs below their own preferred minimum and impose their own higher TTLs.
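The application-level caching problem is easy to reproduce. A sketch of the two patterns (names are mine): a process that resolves the name once keeps connecting to the dead address after failover no matter how low your TTL is, while re-resolving per connection at least lets the resolver hand back the new record once its cache expires.

```python
# The failure mode described above: resolve-once caching vs. re-resolving.
import socket

_cached_ip = None


def resolve_once(host: str) -> str:
    # Anti-pattern: the address is frozen until the program restarts,
    # so a DNS-based failover never reaches this process.
    global _cached_ip
    if _cached_ip is None:
        _cached_ip = socket.gethostbyname(host)
    return _cached_ip


def resolve_fresh(host: str) -> str:
    # Re-resolve on every connection attempt; honours the TTL as far as
    # the stub and recursive resolvers allow.
    return socket.gethostbyname(host)
```

Even `resolve_fresh` is still at the mercy of the resolver chain, which is why the answer below falls back to BGP for the cases where the IP must not change at all.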
Ultimately if the site is to run from both data centers without changing its IP address then you need to look at arrangements for "Multihoming" via global BGP4 route announcements.
With multihoming you need to get at least a /24 netblock of "provider independent" (aka "PI") IP address space, and then have that only be announced to the global routing table from the backup site if the main site goes offline.
As for DNS, I like to reference, "Why DNS Based Global Server Load Balancing Doesn't Work". For everything else -- use BGP.
Designing networks to load balance using BGP is still not an easy task, and I myself am certainly not an expert on this. It's also more complex than Wikipedia can tell you, but there are a couple of interesting articles on the web that detail how it can be done:
Load Balancing In BGP Networks
Load Sharing in Single and Multi homed environments
There is always more if you search for BGP and load balancing. There are also a couple of whitepapers on the net that describe how Akamai does its global load balancing (I believe it's BGP too), which is always interesting to read and learn about.
Beyond the obvious concepts you can use software and hardware to achieve, you might also want to check with your ISP/provider/colo if they can set you up.
Also, no offense in regard to your choice of colo (who's the provider?), but most places should be set up to deal with downtime and so on; they should not require you to take action. Of course, floods or aliens can always strike, but in that case I guess there are more important issues. :-)
If you can, use Multicast (http://en.wikipedia.org/wiki/Multicast) or Anycast (http://en.wikipedia.org/wiki/Anycast).
I was wondering about the best practices regarding this? I know there are two ways to use IIS and host multiple websites.
The first is to have an IP for every website
The second is to use host headers, and a single IP Address for IIS
I was wondering which was the best practice, and why one should be preferred over the other?
Thanks!
It's easier to implement and manage SSL if each site has its own IP address/domain name: you simply get a cert for that name and install it on that site. Doing SSL with host headers requires a wildcard server certificate that is installed and kept synchronized across all sites sharing the IP. You also don't have the restriction that all the sites be in the same domain.
I personally separate sites based on their relation to each other. For example, all of my business sites share a single IP address (1 domain currently). All of my personal/community sites share a second IP address.
The differences can come over time when it comes to sending e-mail, as I know the IP comes into play in some blacklisting systems; so if one site with a shared IP address causes problems, it CAN cause issues for the other sites using that IP.
I am sure there are other items, reasons, and justifications, but those are at least mine...
Personally I find host header configuration makes life very easy for standard web hosting.
I have literally hundreds of sites running off single IP addresses on a number of servers - both IIS and *nix Apache - all configured as virtual hosts. In a live web-hosting environment it makes life easier both in terms of DNS configuration and server configuration.
The only time I use IP-based separation is where I want to run sites on different networks and thus serve the traffic over a different network interface.
I've not seen any performance loss with the host header methodology but would like to hear anyone's horror tales - there have to be some out there :-)
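What host-header dispatch boils down to can be sketched in a few lines: one IP, one listener, and the Host header picks the site. The site names and document roots below are made up for illustration.

```python
# Minimal sketch of name-based (host-header) virtual hosting.
SITES = {
    "www.example.com": "/var/www/example",
    "blog.example.com": "/var/www/blog",
}


def docroot_for(host_header: str, default: str = "/var/www/default") -> str:
    # Strip an optional :port and normalise case before the lookup,
    # roughly mirroring how IIS/Apache match host-header bindings.
    name = host_header.split(":")[0].strip().lower()
    return SITES.get(name, default)


print(docroot_for("WWW.EXAMPLE.COM:8080"))  # /var/www/example
print(docroot_for("unknown.example.net"))   # /var/www/default
```

The lookup is a single dictionary hit per request, which is consistent with host headers showing no measurable performance cost in practice.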
Virtual hosting is usually better than separate IP addresses, but your mileage will vary.
This is really a network versus systems deployment question. You want to look at the total number of sites and services you will have on a system. You might want them to live on separate network interfaces (hence multiple IP addresses). You might want them to live on bonded physical interfaces.
You might want some web applications to run separately from others for security reasons.
The other answers above mention further factors, like SSL and organizational boundaries. (Some software does make associations by IP address, like spam control.) There are probably many other factors I have not thought of.
Host headers are preferred because they conserve IPv4 address space. The Host header has been mandatory since HTTP/1.1.
With https things are a little more complex; you need a modern browser that supports the TLS/SSL server_name extension (RFC 4366 and previously RFC 3546). This includes:
Opera 8.0 or later
Firefox 2.0 or later
IE 7 on Vista
Google Chrome
Of course, your server has to support it too. If you want to support earlier browsers over SSL/TLS, you need to use an IP address per virtual host; as those browsers become obsolete, you'll be able to share IP addresses for TLS/SSL.
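To see what server_name (SNI) actually does: the client sends the hostname inside the TLS ClientHello, so a server on one IP can pick the matching certificate before the handshake completes. A small sketch using Python's `ssl` module (the function is illustrative and opens a real connection if you call it):

```python
# Sketch: the server_hostname argument is what populates the SNI field,
# letting one IP serve the right certificate for each virtual host.
import socket
import ssl


def fetch_cert_subject(host: str, port: int = 443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        # server_hostname sets SNI in the ClientHello and is also used
        # to verify the certificate's name.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]
```

Without SNI, the server must choose a certificate knowing only the destination IP and port, which is exactly why pre-SNI browsers force one IP per SSL site.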