Why do browsers not use SRV records?

It seems like a minimal amount of work and it will make the server-side implementation of reliable websites much simpler.
For example, you could specify tiers, such that www.example.com resolves to 1.2.3.4 and 2.3.4.5, and only if neither of those is available does the client try 4.5.6.7.
SRV records have been around for years...
Is there something I'm missing here?

The RFC for SRV records specifies that they may not be used by pre-existing protocols whose specifications did not already call for SRV records. In other words: since there is no SRV in the HTTP spec, browsers are, by the SRV standard, prohibited from using it.
This does not prohibit a new HTTP standard (an HTTP 1.2, say) from specifying the use of SRV records, though. Mark Andrews proposed exactly that to the IETF HTTP working group in April 2007, but got no response.

There have been two efforts to introduce this that I know of:
draft-andrews-http-srv (2002)
draft-jennings-http-srv (2009)
The "Open Issues" paragraph of the latter draft is illuminating:
The big open issue seems to be if one should just update the HTTP scheme to do this SRV lookup and not create a new scheme. The 00 version of this draft did that. A new scheme makes this somewhat unusable for general web surfing while using the old scheme results in a very long transition times where different clients resolve URLs in different ways.
and that is the crux of the matter. If your site relies on SRV records to be found, it won't work for some users until every browser supports it.
Would you take that risk, without some sort of transition mechanism?
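For concreteness, here is a minimal sketch (in Go, using only the standard library) of the client-side lookup a browser would have to perform. The _http._tcp service label is the one the drafts proposed; nothing here is standardized for web browsing.

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// LookupSRV queries _http._tcp.example.com. The Go resolver returns
    	// the records sorted by priority and randomized by weight, per RFC 2782.
    	_, addrs, err := net.LookupSRV("http", "tcp", "example.com")
    	if err != nil {
    		fmt.Println("no SRV record; fall back to a plain A/AAAA lookup:", err)
    		return
    	}
    	for _, srv := range addrs {
    		// Try targets in this order; lower Priority values come first, so a
    		// backup tier like 4.5.6.7 in the question would only be reached
    		// once the first tier is exhausted.
    		fmt.Printf("try %s:%d (priority %d, weight %d)\n",
    			srv.Target, srv.Port, srv.Priority, srv.Weight)
    	}
    }

Note that the resolver already hands back the records sorted by priority and shuffled by weight, which is exactly the tiering behavior described in the question; the missing piece is purely standardization, not mechanism.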

Jonathan de Boyne Pollard provides a Frequently Given Answer to statements like the following:
"SRV record support hasn't even made it into web browsers yet, let alone clients of less-common protocols."

Because:
The current HTTP RFC does not specify a symbolic service name for use in SRV records and does not specify that SRV records should be used (cf. RFC 2782, Applicability Statement).
It may negatively impact latency in browsers, and browser vendors want to see it standardized for HTTP by the IETF first (cf. Chromium bug report).
It may be fairly complex to integrate into existing browsers (cf. Firefox bug report).
Vendors don't want to say why (cf. WebKit bug report).
The latest draft for adding SRV records to HTTP is andrews-http-srv-02 from 2014, which includes security and transitional considerations and is more complete than the jennings-http-srv-05 draft from 2009. For example, it specifies a security-relevant algorithm for choosing the port when one is given in the URL and a SRV record (which also carries a port field) exists; the jennings draft does not look into this issue.

I was hoping for years that they would standardize SRV for HTTP, but no luck.
For many sites this would be essential: the scalability benefits outweigh the disadvantages, and everything they say about speed and compatibility sounds like a bad excuse. If a server operator wants SRV records to be looked up and applied, why not offer that option to users?
As for compatibility and other issues: we live in the era of DoH, DoQ and DoT, which are not super-compatible or fast either, but are very useful. Forge the metal while it's hot; find no excuses, just do it.

Related

Check if Domain is registered or not without Whois?

To PROGRAMMATICALLY verify if a domain exists I do the following:
1. DNS query it and see if it resolves. If it does, it's obviously registered, so there's no need for step 2. If it doesn't, it might STILL be registered, so a whois check is required.
2. Backtrack from whois.iana.org and see if the designated whois server knows the domain or not.
Well, whois is not really meant for bulk checking. Not to mention that its RFC is only 4 pages long and gives no clear specification of the format or even the encoding of the data, so you pretty much have to train a parser for each specific answer format (i.e. for each server).
Is there a way to circumvent the whois query and check (as close to the metal as possible) if the domain is registered in another (publicly available) standardized (preferably free or affordable) way? And not by downloading the TLD zone file or using third-party APIs (as they have a bad habit of snatching domains that you check before you get to register them). :)
I know registrars have their own protocol but I'm not sure if it's open to public use.
There isn't really any good way to do this accurately without looking at zone files or checking directly with the registry, unfortunately.
Registrars typically use a protocol like EPP to talk to a registry, check name availability and place orders. It's unlikely that anyone other than an accredited registrar would be permitted to use this protocol, but it may be worth checking with the registry that manages the TLDs you are interested in, e.g. Verisign.
I'd (personally) be wary of relying too much on DNS queries or WHOIS lookups to ascertain whether a particular domain exists, as both can produce inaccurate results from time to time. For example, certain TLDs have name servers configured for any unregistered domain name (they often direct you to the registry's website). The Vietnamese registry is one example of this. WHOIS lookups can fail for any number of reasons, so lack of a record is not concrete evidence of the domain's availability.
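To illustrate how thin the WHOIS protocol itself is (RFC 3912, the 4-page document mentioned above), here is a minimal sketch in Go: connect to TCP port 43, send the query followed by CRLF, and read until the server closes the connection. The referral-following from whois.iana.org and the per-server response parsing, which are the genuinely hard parts, are left out.

    package main

    import (
    	"fmt"
    	"io"
    	"net"
    )

    // whois sends a single RFC 3912 query to the given server and returns
    // the free-form text reply.
    func whois(server, query string) (string, error) {
    	conn, err := net.Dial("tcp", net.JoinHostPort(server, "43"))
    	if err != nil {
    		return "", err
    	}
    	defer conn.Close()
    	fmt.Fprintf(conn, "%s\r\n", query) // this is the entire request protocol
    	reply, err := io.ReadAll(conn)     // the server closes the connection when done
    	return string(reply), err
    }

    func main() {
    	out, err := whois("whois.iana.org", "example.com")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(out) // IANA's reply names the whois server to follow up with
    }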

Dynamic DNS references

Could you please cite references/books with more details about Dynamic DNS? I've already tried Wikipedia, IEEE papers and RFCs, for all those people rolling their eyes while reading this. So please, any input is welcome. I need to implement it in a project and would love to know more about it. Thanks.
Dynamic DNS is the concept of updating DNS records on the fly, as opposed to normal (static) DNS, where changing a DNS record requires manual intervention.
Dynamic DNS means that you have some DNS server, and you may programmatically update records on it. This can be achieved in different ways:
1. RFC 2136 dynamic DNS. It's an extension to the good old DNS protocol which allows not only obtaining DNS records but also updating them. Most DNS servers today (for example BIND9 and PowerDNS) support this protocol. Documentation sources: RFC 2136 defines the protocol; nsupdate is the command-line tool which speaks it (read man nsupdate); for details on how to configure BIND9 for dynamic updates, refer to the BIND9 ARM. Libraries supporting dynamic DNS updates with this protocol exist for most languages; for PHP, for example, it's Net_DNS2. It's not well documented, but the site has nice examples which easily allowed me to use it. (A minimal sketch of such an update follows after this list.)
2. Some DNS servers (notably PowerDNS) can read their DNS records from a database back-end. This makes it possible to write new DNS records into a normal SQL database, and the server just takes them from there. Documentation sources: if you choose this way, I very much suggest PowerDNS; look for documentation on the PowerDNS site.
3. If updates are infrequent, it's also possible to update the text zone files on the DNS server and then ask the server to re-read them, though this is probably not a convenient way. All major DNS servers support the same zone file format; I find DNS for Rocket Scientists excellent.
Now, there's a completely different side of dynamic DNS: dynamic DNS services like no-ip.com, my own net-me.net and many others. They all expose some HTTP-based API (usually very simple) to update DNS records, and often provide GUI client software which actually performs the updates. A quick overview of the update protocol, the client and the whole process can be found here. As no standard exists, every provider uses their own variation of the protocol, but they all look quite similar. (Internally, all of these dynamic DNS providers use some form of 1, 2 or 3 described above.)
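As promised in item 1, here is a minimal sketch of an RFC 2136 update. It uses Go with the third-party github.com/miekg/dns package rather than PHP's Net_DNS2; the zone, name server address and record are placeholder assumptions, and TSIG authentication, which any real deployment should use, is omitted for brevity.

    package main

    import (
    	"log"

    	"github.com/miekg/dns"
    )

    func main() {
    	// Build an UPDATE message (opcode UPDATE, RFC 2136) for the zone.
    	m := new(dns.Msg)
    	m.SetUpdate("example.com.")

    	// Add an A record to the zone.
    	rr, err := dns.NewRR("host.example.com. 300 IN A 203.0.113.10")
    	if err != nil {
    		log.Fatal(err)
    	}
    	m.Insert([]dns.RR{rr})

    	// Send it straight to the authoritative server (nsupdate does the
    	// same thing under the hood). Production setups should sign this
    	// with TSIG instead of relying on IP-based permissions.
    	c := new(dns.Client)
    	r, _, err := c.Exchange(m, "ns1.example.com:53")
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("update rcode: %s", dns.RcodeToString[r.Rcode])
    }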
Last but not least, there's a great O'Reilly book, DNS and BIND (5th edition), which covers all possible aspects of DNS.

Testing a DNS server

I'm implementing a DNS server and I wonder if there's any tool, preferably online, that I can use to test that I've implemented the various features right: a tool that I could use to make various requests to the DNS server and check that it follows the RFC 1035 specification. Are there any "reference test cases" or something like that? Or are people who implement protocols supposed to just read the natural-language English documents and trust that they don't make any human mistakes while reading them? Wouldn't a standard be stronger if it had test cases and not just a description? Anyway, I digress. How do I test a DNS server for compliance with the standard?
Zonecheck is probably the tool you're looking for:
http://www.zonecheck.fr
http://www.zonecheck.fr/demo/
It's open source, written in Ruby, and officially used by the French registry for .fr domains.
The difficulties in devising a generic test suite for DNS servers are twofold:
recursive servers need much more functionality than authoritative servers
standard tests need a standardised set of test data
The latter is probably the largest problem - you'd have to find a way to load up your DNS server with all of the data that the test suite expects.
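Short of a full test suite, individual behaviors can be spot-checked with black-box tests that send crafted queries and inspect the reply. Here is a minimal sketch in Go using the third-party github.com/miekg/dns package; the address 127.0.0.1:5353 is an assumed local instance of the server under test, loaded with a zone for example.com.

    package dnstest

    import (
    	"testing"

    	"github.com/miekg/dns"
    )

    func TestAnswersAQuery(t *testing.T) {
    	m := new(dns.Msg)
    	m.SetQuestion("example.com.", dns.TypeA)

    	c := new(dns.Client)
    	r, _, err := c.Exchange(m, "127.0.0.1:5353")
    	if err != nil {
    		t.Fatal(err)
    	}
    	// Two basic RFC 1035 requirements: the reply must have the QR bit
    	// set, and its ID must echo the query's ID.
    	if !r.Response {
    		t.Error("QR bit not set in the reply")
    	}
    	if r.Id != m.Id {
    		t.Error("reply ID does not match query ID")
    	}
    }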

A bare-bones DNS server

I have to implement a DNS server in C and I don't know where to start. What are all the features that a DNS server has? How can I implement a bare-bones DNS server in a single C file? I don't even want to use a database; just a file will work.
Thanks in advance
That's big for homework! Your teacher is ambitious. Implementing DNS requires reading at least ten complicated RFCs (not mentioning DNSSEC...). Do not limit yourself to RFCs 1034 and 1035; there are mandatory RFCs that came after (such as 2181 and 2671). See a nice graph of them.
Is it an authoritative name server or a recursive one?
Do you have to do it from scratch? If not, I strongly suggest starting with the evldns library, which allows you to write an authoritative name server in 200 lines of C.
Otherwise, the usual advice applies: read source code (I suggest nsd for an authoritative server and unbound for a recursive one).
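The question asks for C, but to give an idea of how small the core loop of a toy authoritative server can be (the point of the evldns suggestion above), here is a sketch using Go and the third-party github.com/miekg/dns package. The name toy.example.com and port 5353 are placeholders; it answers one hard-coded A record, returns NXDOMAIN for everything else, and skips nearly everything else RFC 1035 requires.

    package main

    import (
    	"log"
    	"strings"

    	"github.com/miekg/dns"
    )

    // handle answers A queries for a single hard-coded name and returns
    // NXDOMAIN for anything else.
    func handle(w dns.ResponseWriter, req *dns.Msg) {
    	m := new(dns.Msg)
    	m.SetReply(req)
    	q := req.Question[0]
    	if q.Qtype == dns.TypeA && strings.EqualFold(q.Name, "toy.example.com.") {
    		rr, err := dns.NewRR(q.Name + " 60 IN A 192.0.2.1")
    		if err == nil {
    			m.Answer = append(m.Answer, rr)
    		}
    	} else {
    		m.Rcode = dns.RcodeNameError // NXDOMAIN
    	}
    	w.WriteMsg(m)
    }

    func main() {
    	dns.HandleFunc(".", handle)
    	// Port 5353 avoids needing root for the real port 53.
    	log.Fatal(dns.ListenAndServe(":5353", "udp", nil))
    }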
DNS is a big spec. If you really want DNS, use a DNS server. If you want something really quick and dirty, why not just write a program that edits your hosts file (C:\windows\system32\drivers\etc\hosts on Windows, or /etc/hosts on UNIX)?
dns.net points to RFC 1034: DOMAIN NAMES - CONCEPTS AND FACILITIES and RFC 1035: DOMAIN NAMES - IMPLEMENTATION AND SPECIFICATION as the definitive references.
As a topical plus, wow your teacher by including some non-ASCII IDN names in your toy lookup list.
The RFCs that the protocol is based on can be found here: http://www.zoneedit.com/doc/rfc/
There are also several explanations of the protocol that should be useful to be found around the internet, such as this one: http://www.windowsnetworking.com/articles_tutorials/Understanding-DNS-Protocol-Part1.html
This should get you started.
This example uses BSD sockets to build a simple DNS resolver.
http://www.binarytides.com/blog/dns-query-code-in-c-with-winsock-and-linux-sockets/

Is it good practice to hide web server information in HTTP headers?

This question is more security related than programming related, sorry if it shouldn't be here.
I'm currently developing a web application and I'm curious as to why most websites don't mind displaying their exact server configuration in HTTP headers, like versions of Apache and PHP, complete with "mod_perl, mod_python, ..." listings and so on.
From a security point of view, I'd prefer that it be impossible to find out whether I'm running PHP on Apache, ASP.NET on IIS or even Rails on Lighttpd.
Obviously "obscurity is not security", but should I be worried at all that visitors know what versions of Apache and PHP my server is running? Is it good practice or totally unnecessary to hide this information?
Prevailing wisdom is to remove the server ID and the version; better yet, change them to another legitimate server ID and version - that way the attacker goes off trying IIS vulnerabilities against Apache or something like that. Might as well mislead the attacker.
But honestly, there are so many other clues to go by, I wonder about whether this is worth it. I suppose it could stop attackers using a search engine to find servers with known vulnerabilities.
(Personally, I don't bother on my HTTP server, but it's written in Java and much less vulnerable to the typical kinds of attack.)
I think you usually see those headers because the systems send them by default.
I routinely remove them as they provide no real value and could, as you suggested, reveal information about the server.
Hiding the information in the headers usually just slows down the lazy and ignorant villains. There are many ways to fingerprint a system.
Running nmap -O -sV against an IP will give you the OS and service versions with a fairly high degree of accuracy. The only extra info you're giving away by having your server advertise that information is which modules you have loaded.
It seems that some of the answers are missing an obvious advantage of turning off the headers.
Yes, you're all right; turning off the headers (and the status line present, e.g., in directory listings) does not stop an attacker from finding out what software you use.
However, turning this information off prevents malware that uses Google to look for vulnerable systems from finding you.
tl;dr: Don't use it as a (let alone THE) security measure, but as a measure to drive away unwanted traffic.
I normally turn off Apache's long header version information with ServerTokens; it adds nothing useful.
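For reference, that is a two-directive change in Apache's main configuration (both are stock httpd directives; "Prod" is the shortest setting ServerTokens accepts):

    # "Prod" trims the Server header to just "Apache"; the header cannot be
    # removed entirely without extra modules.
    ServerTokens Prod
    # Removes the version footer from server-generated pages such as error
    # pages and directory listings.
    ServerSignature Off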
One point which nobody has picked up on is that it looks like better security to a prospective client, pen-testing company, etc., if you're giving out less information from your web server.
So giving out less information boosts perceived security (i.e. it shows you have actually thought about it and done something).
