Retrieve all public hostnames from a BIND server (Linux)

I need to save all the public hostnames that can normally be discovered with a DNS query from my DNS server (BIND9 on a Kubuntu distribution), and then open that list for further processing in a C++ program.
How can I perform this export? Thanks a lot!

You can use the host or dig command to run an AXFR query and redirect the output to a file:
host -t axfr yourdomain.com > records.txt
or
dig yourdomain.com axfr > records.txt
You can do this directly on the DNS server or on any other host that BIND allows to perform the zone transfer.
Note that TCP port 53 must be open towards your DNS server if you run the query from an external host.
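If you only need the hostnames themselves for the later processing step, the AXFR output can be trimmed down before handing it to your program. A minimal sketch, assuming the yourdomain.com and records.txt names used above and that the transfer is permitted (hostnames.txt is just an arbitrary output name; in dig's AXFR output the fourth column is the record type):
dig yourdomain.com axfr > records.txt
# keep only the names of A/AAAA records, one per line, deduplicated
awk '$4 == "A" || $4 == "AAAA" {print $1}' records.txt | sort -u > hostnames.txt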

Related

How to Connect an Externally Hosted Website with AWS CloudFront CDN

I am hosting my site on Vultr and I want to connect it to the CloudFront CDN. How do I do this? I have tried, but it shows an error about an origin connectivity issue.
You see, this is a very specific situation and Vultr does not have the same integration with CloudFront as it does with Cloudflare. For this I had to do the following:
First:
Allow the CloudFront IPs through the server's firewall: CloudFront has about 135 IP ranges and Vultr's firewall panel can only register 50 entries, so this responsibility has to move to the server itself.
Create a script that adds only the CloudFront IPs to UFW.
I started from this repo: https://github.com/Paul-Reed/cloudflare-ufw
So I have this in CRON:
0 0 * * 1 /usr/local/bin/cloudflare-ufw > /dev/null 2>&1
And for my case the script looked like this:
#!/bin/sh
curl -s https://www.cloudflare.com/ips-v4 -o /tmp/cf_ips
curl -s https://www.cloudflare.com/ips-v6 >> /tmp/cf_ips
# Allow all traffic from Cloudflare IPs (no port restrictions)
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip comment 'Cloudflare IP'; done
ufw reload > /dev/null
OTHER EXAMPLES OF RULES
Restrict to port 80
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80 comment 'Cloudflare IP'; done
Restrict to ports 22 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 22,443 comment 'Cloudflare IP'; done
Restrict to ports 80 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80,443 comment 'Cloudflare IP'; done
ufw reload > /dev/null
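To sanity-check what the script produced, you can list the UFW rules it tagged. This is a small sketch that relies only on the 'Cloudflare IP' comment used above (recent UFW versions show rule comments in the status output):
ufw status numbered | grep 'Cloudflare IP'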
Second:
I configured CloudFront; my case was specific to WordPress traffic. I followed these steps:
I created an AWS Certificate Manager public certificate
As per the AWS documentation: https://docs.aws.amazon.com/pt_br/acm/latest/userguide/gs-acm-request-public.html#request-public-console
I created the distribution on CloudFront: https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating.html
The distribution will be responsible for the security and performance of the application.
I created a certificate for the origin server: https://www.gocache.com.br/seguranca/como-gerar-certificado-ssl-via-terminal-certbot-com-wildcard/
It is necessary to install a valid SSL certificate on your origin server so that CloudFront can make a secure connection to it. I recommend Let's Encrypt as a free solution for generating certificates.
I registered the record in the DNS table: https://docs.aws.amazon.com/pt_br/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
For the distribution to be accessible by the website address, it is necessary to register the address in the DNS table.
The record is a CNAME and its value is the distribution domain name. You can find this information in the Details section of the CloudFront distribution's General tab.
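Once the CNAME is in place you can verify it from any machine. A small check, using www.example.com as a stand-in for the real site address (the distribution domain below is a made-up example):
dig +short www.example.com CNAME
# should print the distribution domain name, e.g. d111111abcdef8.cloudfront.net.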

How do I make dig use a source IP other than localhost while querying a DNS server running locally on my machine?

I am trying to run a CoreDNS plugin (https://github.com/coredns/demo) that returns 1.1.1.1 for 172.0.0.0/8 or 127.0.0.0/8 and 8.8.8.8 for everything else.
I run the binary and make a request with dig using dig example.org @localhost -p 1053 +short, which returns 1.1.1.1 since the request is sent from localhost.
Is there any way I can send a request from dig to CoreDNS so that it looks to the DNS server as if it came from another IP, and it returns 8.8.8.8 instead?
From the dig manual:
-b address[#port]
Set the source IP address of the query. The address must be a valid address on one of the host's network interfaces, or "0.0.0.0" or "::". An optional port may be
specified by appending "#<port>"
Otherwise, if the server supports ECS (EDNS Client Subnet), you can use the dig option +subnet=addr to pass a client subnet to the server and see how its reply changes.
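A small sketch of both approaches against the local CoreDNS instance from the question (192.0.2.5 is a placeholder; -b only works if that address is actually configured on one of your interfaces, and +subnet only changes anything if the server honours ECS):
# present a different client subnet via EDNS Client Subnet
dig @localhost -p 1053 example.org +subnet=203.0.113.0/24 +short
# or send the query from a specific local source address
dig -b 192.0.2.5 @localhost -p 1053 example.org +short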

cPanel Server Incorrect URL Resolve

My cPanel server is resolving a URL incorrectly. The website example.com is hosted on my cPanel server at IP 1.0.0.1. In a script I am attempting a cURL request to cp.example.com, which is hosted on another server at 2.0.0.2. My server is resolving cp.example.com to 1.0.0.1. Any help will be greatly appreciated!
It seems that the DNS settings for cp.example.com are not visible on the host where you are running your script. You should check the DNS settings for cp.example.com. You may also want to contact cPanel support.
When you make a cURL request from a script hosted on your cPanel server, the IP for the domain is first resolved locally; if it is not found in your server's DNS zones, it is resolved using the configuration in /etc/resolv.conf.
You can test which IP your server is resolving by logging in via SSH and pinging the host.
Executed from your cPanel server:
ping cp.example.com
I can think of two workarounds for this issue:
If example.com's DNS zone is hosted in your cPanel account
Go to cPanel -> Zone Editor
Open the DNS zone for example.com
Find the A record for cp.example.com
Change it to 2.0.0.2
If you have root access, edit your WHM / cPanel server's /etc/hosts file:
root@server #: vim /etc/hosts
2.0.0.2 cp.example.com
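After either change you can confirm what the server now resolves (the IPs are the hypothetical ones from the question; getent follows the same lookup order your scripts use):
getent hosts cp.example.com
ping -c 1 cp.example.com
Both should now show 2.0.0.2.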

host doing unnecessary dns lookup for localhost

I have a CentOS system (embedded, with very few binaries) with the following /etc/hosts.
$cat /etc/hosts
127.0.0.1 localhost localhost
The host is also assigned a DNS server which returns an invalid IP for the lookup of localhost, and I cannot avoid a connection to this DNS server due to some network restrictions.
My question is, when I already have a valid /etc/hosts file why is the system querying the DNS for localhost? And how can I stop that?
Any help would be greatly appreciated.
Check that you have files listed before dns for the hosts entry in /etc/nsswitch.conf.
[me#home]$ grep "^hosts" /etc/nsswitch.conf
hosts: files dns
If dns comes first, then your system will always query DNS to resolve hostnames before falling back to /etc/hosts.
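If the order is wrong, it can be corrected in place. A minimal sketch (back up the file first; the path is the standard /etc/nsswitch.conf):
cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
# make hosts lookups consult /etc/hosts before DNS
sed -i 's/^hosts:.*/hosts:      files dns/' /etc/nsswitch.conf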

Assigning a domain name to localhost for development environment

I am building a website and would rather not keep reconfiguring it between pointing at http://127.0.0.1 and http://www.example.com. Furthermore, the certificate I am using was of course issued for the proper domain name www.example.com, but my test environment makes calls to 127.0.0.1, which breaks the security checks.
What I currently want to do is configure my development environment to map the domain name www.example.com to 127.0.0.1, so that http://www.example.com/xyz is routed to http://127.0.0.1:8000/xyz and https://www.example.com/xyz is routed to https://127.0.0.1:8080/xyz.
I am not using Apache. I am currently using node.js as my web server and my development environment is in Mac OS X Lion.
If you edit your /etc/hosts file you can assign an arbitrary host name to 127.0.0.1.
Open up /etc/hosts in your favorite text editor and add this line:
127.0.0.1 www.example.com
I am not sure how to avoid specifying the port in the HTTP requests you make to example.com, but if you must avoid it at the request level, you could run Node.js as root so it can listen on port 80.
Edit: after editing /etc/hosts, the DNS request for that domain may already be cached. You can clear the cached entry by running this on the command line:
dscacheutil -flushcache
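To confirm the mapping took effect, you can query the OS X resolver directly (www.example.com is the placeholder domain from the question):
dscacheutil -q host -a name www.example.com
# should report ip_address: 127.0.0.1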
