Multiple domains for the same website - dns

Short question: What is the optimal method for routing multiple domains to the same website? Specifically, how do I route a URL with an internationalized TLD (.рф or .срб) and an ASCII TLD (.ru or .rs, respectively)?
Long question: I have two domain names for the same website, one ASCII and one internationalized (Cyrillic), http://domain.rs and http://домен.срб, pointing to the same website. On the one hand, I know of many websites which use both domains equally and in parallel (for example http://rts.rs and http://ртс.срб), but on the other, I've been advised that this is bad practice from an SEO point of view, and that I should instead have one domain redirect to the other. Is there any advice, or are there resources where I can learn how to handle internationalized domains alongside ASCII ones?

Using "parallel" domains, without some kind of canonicalization in-place, will result in duplicated content issue. So, I wouldn't suggest it at all.
(There is a "loop-hole", sort of speak, that allows different TLDs to appear independently for different locals but truly this gains you nothing at all, just removes some of the DC issues...)
If I understand you correctly, the right thing to do here is to stick to one Main Domain and use 301 redirects for all others (page to page preferably). Ascii or not, is irrelevant. For Main Domain, choose you "oldest" one or/and the one with most inbound links.
In the long run this is also most practical solution as it will allow you to concentrate your link-building efforts, focusing all inbound links around one Winner instead of just spreading them around among several Mediocres.
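For illustration, here is a minimal sketch of the "page to page" 301 redirect in Python (not a production setup; in practice you would configure this in your web server, and the hostname and port here are placeholders):

from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL = "http://domain.rs"  # the chosen main domain (placeholder)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Preserve the path, so /about on the old domain lands on
        # /about on the main one (page to page, not everything to /).
        self.send_response(301)
        self.send_header("Location", CANONICAL + self.path)
        self.end_headers()

# Run this answering for the secondary domain(s), e.g. домен.срб.
HTTPServer(("", 8080), RedirectHandler).serve_forever()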

Related

Using the parent domain to query DNS SRV for a subdomain

I am writing an application that queries DNS SRV records to find an internal service for a domain obtained from an email address. Is it correct to do the following?
Let's say the email domain is test.example.com:
Query the SRV record _service._tcp.test.example.com.
No SRV record is returned.
Now query the SRV record _service._tcp.example.com.
A record is returned; hence, use this record to connect.
Is the above approach right? Assuming it's not, are there any RFCs or standards that prevent an application from doing this?
Is the above approach right?
No, it is not. You should not "climb" to the root.
There is nothing in the RFCs explicitly telling you not to do that, and you will even find some specifications telling you to climb to the root; see the CAA specification (but it had to be changed over the years precisely because of some unclarity around the part about climbing to the root).
Most of the time, such climbing creates more problems than it solves, and it all comes down to "finding the administrative boundaries", which looks far simpler than it really is.
If we go back to your example: you query _service._tcp.test.example.com, then _service._tcp.example.com, and then I suppose you stop there, because you "obviously" know that you shouldn't go to _service._tcp.com as the next step; you "know" that example.com and com are not under the same administrative boundary, so you shouldn't cross that limit.
OK, yes, in that specific example (and TLD) things seem simple. But imagine an arbitrary name, let us say www.admin.santé.gouv.fr: how do you know where to stop climbing?
It is a difficult problem in full generality. Attempts were made to solve it (see the IETF DBOUND working group) and failed. You basically have two avenues if you need to pursue this: either find delegations (zone cuts) with live DNS queries (not every delegation is a new administrative boundary, but a change of administration should mean a delegation; and there is not necessarily a delegation at each dot, so you cannot find them just by looking at the string), or use the Mozilla Public Suffix List, which has a lot of drawbacks.
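A sketch of the first avenue, assuming the third-party dnspython package (the example name is illustrative): walk the labels and treat every name that answers with its own SOA record as a zone apex, i.e. a delegation point:

import dns.resolver

def zone_cuts(name: str) -> list[str]:
    labels = name.rstrip(".").split(".")
    cuts = []
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        try:
            answer = dns.resolver.resolve(candidate, "SOA")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # not a zone apex (or the name does not exist)
        # A zone apex answers with an SOA record bearing its own name.
        if answer.rrset.name.to_text(omit_final_dot=True) == candidate:
            cuts.append(candidate)
    return cuts

# Remember: a zone cut is not necessarily an administrative boundary.
print(zone_cuts("www.admin.sante.gouv.fr"))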
This is all basically a rehash of what you can read in "§4. Zone Boundaries are Invisible to Applications" of RFC5507; quoting the core part here:
The false assumption has lead to an approach called "tree climbing",
where a query that does not receive a positive response (either the
requested RRSet was missing or the name did not exist) is retried by
repeatedly stripping off the leftmost label (climbing towards the
root) until the root domain is reached. Sometimes these proposals
try to avoid the query for the root or the TLD level, but still this
approach has severe drawbacks:
[..]
o For reasons similar to those outlined in RFC 1535 [RFC1535],
querying for information in a domain outside the control of the
intended entity may lead to incorrect results and may also put
security at risk. Finding the exact policy boundary is impossible
without an explicit marker, which does not exist at present. At
best, software can detect zone boundaries (e.g., by looking for
SOA Resource Records), but some TLD registries register names
starting at the second level (e.g., CO.UK), and there are various
other "registry" types at second, third, or other level domains
that cannot be identified as such without policy knowledge
external to the DNS.
Note also the example given for MX records: in a naive view you would apply the same climbing algorithm there, but as the RFC says:
To restate, the zone boundary is purely a boundary that exists in the
DNS for administrative purposes, and applications should be careful
not to draw unwarranted conclusions from zone boundaries. A
different way of stating this is that the DNS does not support
inheritance, e.g., an MX RRSet for a TLD will not be valid for any
subdomain of that particular TLD.
There are various examples of people having tried to climb to the root... and creating a lot of problems:
in the past, Microsoft and wpad.dat: https://news.softpedia.com/news/wpad-protocol-bug-puts-windows-users-at-risk-504443.shtml
more recently, Microsoft again about email autodiscover: https://www.zdnet.com/article/design-flaw-in-microsoft-autodiscover-abused-to-leak-windows-domain-credentials/
So, in short: without a solid understanding of DNS, please do not create anything that "climbs" to the root. Note that RFC2782 on SRV gives "Usage Rules" that include no case of climbing to the root.
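In code, the non-climbing behaviour is simply: build one name, query it once, and treat a negative answer as "no service", as in this sketch with the dnspython package (service, protocol, and domain are placeholders):

import dns.resolver

def lookup_srv(service: str, proto: str, domain: str):
    # Query the SRV record at the exact domain; never climb to the parent.
    qname = f"_{service}._{proto}.{domain}"
    try:
        answer = dns.resolver.resolve(qname, "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no service advertised for this domain; stop here
    # RFC2782: contact targets in order of priority (lowest first).
    return sorted(answer, key=lambda record: record.priority)

print(lookup_srv("imap", "tcp", "test.example.com"))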
You are not fully explaining why you are thinking about this. I suggest you have a look at the new HTTPS/SVCB DNS records (the RFCs are not published yet, but the RR type codepoints are already assigned by IANA and already in use by Apple, Cloudflare, and Google), as they may provide a feature set similar to SRV while being more relevant for your use case.

Difference between subdomains `WWW` and `m`

What's the difference between the two subdomains www and m?
For instance, www.medium.com and m.medium.com.
In my country, www.medium.com is blocked, but when I try m.medium.com instead, it works well. So I want to know: what's the difference between them? Can they have different content?
PS: I have searched for this but haven't found anything related.
Thanks in advance.
Yes, they can. In fact, the two domains www.medium.com and m.medium.com could point to two completely different servers. This is up to the owner of medium.com to decide.
Usually, an 'm' subdomain points to the version of the site that is intended for mobile devices, i.e. devices with a smaller screen size.
Pinging the domains, I get 162.159.153.4 for medium.com, 162.159.152.4 for www.medium.com, and 162.159.152.4 for m.medium.com.
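You can reproduce that check with a few lines of standard-library Python (the addresses you get back will vary by resolver and over time):

import socket

for host in ("medium.com", "www.medium.com", "m.medium.com"):
    # Each hostname resolves independently; nothing forces them to
    # share an IP address, let alone serve the same content.
    print(host, socket.gethostbyname(host))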

Sort domains by number of public web pages?

I'd like a list of the top 100,000 domain names sorted by the number of distinct, public web pages.
The list could look something like this:
Domain Name 100,000,000 pages
Domain Name 99,000,000 pages
Domain Name 98,000,000 pages
...
I don't want to know which domains are the most popular. I want to know which domains have the highest number of distinct, publicly accessible web pages.
I wasn't able to find such a list via Google. I assume Quantcast, Google, or Alexa would know, but have they published such a list?
For a given domain, e.g. yahoo.com, you can do a Google search for site:yahoo.com; at the top of the results it says "About 141,000,000 results (0.41 seconds)". This includes subdomains like www.yahoo.com and it.yahoo.com.
Note also that some websites generate pages on the fly, so they might, in fact, have infinitely many "pages": a page is computed when requested, forgotten as soon as it is sent, and can contain a link to the next page. For such sites there is no real total, and you can't discover it without requesting them all.
Keep in mind a few things:
Many websites generate pages dynamically, leaving a potentially infinite number of pages.
Pages are often behind security barriers.
Very few companies are interested in announcing how much information they maintain.
Indexes go out of date as they're created.
What I would be inclined to do for specific answers is mirror the sites of interest using wget and count the pages.
wget -m --wait=9 --limit-rate=10K http://domain.test
Keep it slow, so that the company doesn't mistake your crawl for a denial-of-service attack.
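Once the mirror completes, counting the pages is straightforward; a sketch in Python, assuming wget saved the mirror under a directory named after the host and that pages end in .html:

from pathlib import Path

# wget -m writes the mirror into a directory named after the host.
pages = list(Path("domain.test").rglob("*.html"))
print(f"{len(pages)} pages mirrored")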
Most search engines will also let you search their index by site, though the counts on the result pages may not be reliable beyond a rough order of magnitude, and there's no way to know how much of a site they've indexed.
I don't see at a glance where they keep their databases, or whether they offer access to them, but down the search-engine path you might also be interested in the Seeks and YaCy search engine projects.
The only organization I can think of that might (a) have the information easily available and (b) be friendly and transparent enough to want to share it would be the folks at The Internet Archive. Since they've been archiving the web with their Wayback Machine for a long time and are big on transparency, they might be a reasonable starting point.

Is it possible to have one (single) character top level domain name?

I'm writing a regex to validate email addresses. The only thing that confuses me is:
Is it possible to have a single character for a top-level domain name? (e.g. lockevn.c)
Background: I know a top-level domain name can be anything from 2 characters upward (.uk and .us to .canon and .museum). I have read some documents, but I can't figure out whether 1 character is allowed or not.
It is technically possible; however, no single-character TLD has been accepted into the root (as of this moment). So the answer is: yes, it is possible to have a single-character top-level domain name, but there are currently none in the root.
You can see the list of TLDs that are currently in the root at this URL:
http://data.iana.org/TLD/tlds-alpha-by-domain.txt
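You can even verify the claim programmatically against that list; a standard-library Python sketch (the file's first line is a comment, the rest are TLDs in A-label form):

import urllib.request

URL = "http://data.iana.org/TLD/tlds-alpha-by-domain.txt"
with urllib.request.urlopen(URL) as response:
    lines = response.read().decode("ascii").splitlines()

tlds = [line for line in lines if not line.startswith("#")]
single = [tld for tld in tlds if len(tld) == 1]
print(f"{len(tlds)} TLDs in the root, {len(single)} single-character")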
RFC-952 defines what a "name" is, and this includes what is valid as a top-level domain:
A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.).
Additionally, the grammar from RFC-952 shows:
<name> ::= <let>[*[<let-or-digit-or-hyphen>]<let-or-digit>]
RFC-1123 section 2.1 specifically allowed single-letter domains and subdomains, relaxing the original RFC-952 grammar so a name may start with either a letter or a digit, which means a single-character top-level domain could even be a digit:
2.1 Host Names and Numbers
The syntax of a legal Internet host name was specified in RFC-952.
One aspect of host name syntax is hereby changed: the
restriction on the first character is relaxed to allow either a
letter or a digit. Host software MUST support this more liberal
syntax.
EDIT: As per mr.spuratic's comment, RFC-3696 section 2 tightened the rules for top-level domains, stating:
There is an additional rule that essentially requires
that top-level domain names not be all-numeric.
This means that:
a. is a valid top level domain
1. is not a valid top level domain
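For the regex in the question, those rules for the TLD label (at least one character, letters/digits/hyphens, not all-numeric) can be sketched like this in Python; this covers only the TLD part, not the full email grammar:

import re

# 1-63 chars, starts/ends with a letter or digit, and (per RFC-3696)
# not all-numeric. Single letters like "c" are allowed.
TLD = re.compile(r"^(?!\d+$)[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$", re.I)

for label in ("c", "a", "1", "uk", "museum", "123"):
    print(label, bool(TLD.match(label)))
# "c", "a", "uk", "museum" match; "1" and "123" do not.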
A very unscientific test of this: if I add "a" to my hosts file pointing at my local machine, going to http://a in my address bar does show my Apache welcome page.
I'm not sure about the internet standard, but in practice, no.
See,
http://www.norid.no/domenenavnbaser/domreg.html
and,
http://sqa.fyicenter.com/Online_Test_Tools/Domain_Name_Format_Validator.php
You should DEFINITELY allow 1-character domains, since some registries allow them deliberately, not by accident (and I speak of quite big registries such as the UK, Germany, Poland, and Ireland, i.e. important contributors to the Internet community, not only small exotic exceptions). I also plan on using such domains, both letters and numbers, and they definitely work with all the e-mail services I have used, so I really would advise allowing them; otherwise your script might need correcting later.
Also, some of the biggest internet companies use such domains; one of the most famous examples is Twitter's t.co for URL shortening. Other companies I know of that have such domains are Facebook, Google, PayPal, and Deutsche Telekom. The list is longer, and some bigger investors also hold them as assets.
By the way, as proof, there are websites trading this kind of domain online if you search for "1 letter domain names" :)

Hackproofing the site?

I don't know how to make my site hackproof at all. I have inputs where people can enter information that gets published on the site. What should I filter, and how?
Should I disallow script tags? (The issue is: how will they put YouTube embed code on the site then?)
iframes? (People can put inappropriate sites in iframes...)
Please let me know some ways I can prevent issues.
First of all, run the user's input through a strict XML parser.
Reject any invalid markup.
You should use a whitelist of HTML tags and attributes (in the parsed XML).
Do not allow <script> tags, <iframe>s, or style attributes.
Run all URLs (href and src attributes) through a URI parser (e.g. .NET's Uri class), and ensure that the protocol is http, https, or perhaps mailto. Again, reject any invalid URLs.
If you want to allow YouTube embedding, add your own <youtube> tag that takes a URL or video ID as a parameter (content or attribute), and transform it into a script on the server (after validating the parameter).
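A minimal sketch of that whitelist approach in Python, using only the standard library (the allowed tags, attributes, and schemes here are illustrative, not a complete policy):

from html.parser import HTMLParser
from urllib.parse import urlparse

ALLOWED_TAGS = {"p", "b", "i", "a", "ul", "ol", "li", "blockquote"}
ALLOWED_ATTRS = {"a": {"href"}}
ALLOWED_SCHEMES = {"http", "https", "mailto"}

class WhitelistChecker(HTMLParser):
    # Reject anything outside the whitelist instead of trying to clean it.
    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            raise ValueError(f"tag not allowed: <{tag}>")
        for name, value in attrs:
            if name not in ALLOWED_ATTRS.get(tag, set()):
                raise ValueError(f"attribute not allowed: {name}")
            if name in ("href", "src"):
                if urlparse(value or "").scheme not in ALLOWED_SCHEMES:
                    raise ValueError(f"URL scheme not allowed: {value!r}")

WhitelistChecker().feed('<p>hello <a href="https://example.com">link</a></p>')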
After you finish, make sure that you're blocking everything on this giant list.
There is no such thing as hackproof. You want to do everything you can to decrease the possibility of being hacked. The most obvious weaknesses are cross-site scripting (XSS) and SQL injection attacks. There are easy ways to guard against both, most notably using newer technologies that are designed to ward them off (text output that is encoded by default, parameterized queries instead of string-built ones), etc.
If you need to go beyond that level, there is a whole spectrum of options, from automated scanning services (mostly fuzzy numbers you can hand your sales people once everything comes back "good") down to hard-core analysts who will pick your system apart for various audits.
Other than the basics mentioned above (XSS and SQL injection), the level of security you should try to attain really depends on your market.
I didn't see this mentioned explicitly, but you should also use fuzzers (http://en.wikipedia.org/wiki/Fuzz_testing).
A fuzzer basically shoves random junk (strings of varying characters and lengths) into your input fields; it's standard industry practice because it finds lots of bugs (e.g. overflows).
http://www.fuzzing.org/ has a list of great fuzzers for you to try.
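To make the idea concrete, here is a toy fuzzing loop in standard-library Python (the endpoint and field name are placeholders; real fuzzers are far smarter about input generation):

import random
import string
import urllib.error
import urllib.parse
import urllib.request

URL = "http://localhost:8080/comment"  # placeholder form endpoint

for _ in range(100):
    length = random.randint(1, 5000)
    payload = "".join(random.choices(string.printable, k=length))
    data = urllib.parse.urlencode({"text": payload}).encode()
    try:
        with urllib.request.urlopen(URL, data=data) as response:
            status = response.status
    except urllib.error.HTTPError as err:
        status = err.code
    if status >= 500:  # a server error hints at an unhandled input
        print(f"server error on {length}-byte payload: {payload[:60]!r}")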
You can check out a penetration-testing framework like ISAAF. It gives you a checklist and a methodology for testing the important security aspects of your application.
