john

Reputation: 741

DNS second-level domain search efficiency

How efficient is DNS second-level domain lookup? For example, in the URL web.example.com, the top-level domain is .com and the second-level domain is example. When we query for web.example.com, the root servers provide the gTLD servers for .com. After a gTLD server is selected for .com, that server returns the nameservers for example.com. How can one gTLD server know the nameservers for every possible second-level domain (such as "example.com") when there could be so many of them? Based on Wikipedia (en.wikipedia.org/wiki/Domain_Name_System#Structure), each label is up to 63 characters, so even if we limit ourselves to the English alphabet, that already gives us 26^63 possible second-level domains.
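To put the 26^63 figure in perspective, a quick back-of-the-envelope calculation (just arithmetic on the label limit quoted above; the ~100 million registered-domain count is a rough order-of-magnitude assumption, not an official figure):

```python
# Labels of exactly 63 lowercase letters a-z, versus a rough
# order-of-magnitude estimate of actually registered .com domains.
possible = 26 ** 63
registered = 100_000_000  # assumed estimate for illustration

print(f"possible: {possible:.2e}")    # astronomically large keyspace
print(f"registered: {registered:.0e}")
# The gTLD servers only ever store the registered names, so the size
# of the theoretical keyspace is irrelevant to lookup efficiency.
```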

Upvotes: 0

Views: 389

Answers (2)

Alnitak

Reputation: 340055

Most likely (I haven't checked, but will ask when I see the main author of BIND next) they just use a standard binary tree.

A properly balanced binary tree would need to be about 27 levels deep to hold the ~100M .com domain names.
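As a sanity check on that depth estimate (a minimal calculation, assuming the ~100M figure above):

```python
import math

# A balanced binary tree doubles its capacity with each level,
# so the minimum depth for n keys is ceil(log2(n)).
n = 100_000_000  # rough number of .com domain names
depth = math.ceil(math.log2(n))
print(depth)  # 27 levels, matching the estimate above
```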

It's unlikely to be a hash table, since DNS servers typically need to be able to produce a sorted zone file on demand, and hash tables aren't amenable to producing a sorted list of their keys.
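A toy illustration of the trade-off (a Python sketch, not how BIND actually stores its data; a sorted list stands in for a balanced tree here): a hash table only yields sorted keys via an explicit O(n log n) sort, while an ordered structure keeps its keys in order as a side effect of insertion.

```python
import bisect

names = ["web.example.com", "api.example.com", "mail.example.com"]

# Hash-table style: O(1) average lookups, but producing a sorted
# zone requires an extra full sort over all the keys.
table = {name: "192.0.2.1" for name in names}
zone_from_hash = sorted(table)

# Ordered structure: inserts maintain sorted order, so an in-order
# walk produces the zone listing with no extra sorting pass.
ordered = []
for name in names:
    bisect.insort(ordered, name)

print(zone_from_hash == ordered)  # True: same zone, obtained differently
```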

Upvotes: 0

Joachim Isaksson

Reputation: 181097

The reason is that only a tiny fraction of the 26^63 possible domains are actually registered.

Also, the DNS system is hierarchical, so once a DNS server at an ISP has looked up, for example, cnn.com, it caches the result and won't ask the root servers about it again for a set time (the record's TTL), even when other clients ask for it. After a while, many popular domains end up cached very near the clients.
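That caching behavior can be sketched like this (a minimal hypothetical TTL cache, not a real resolver; the upstream lookup is a stand-in function):

```python
import time

class TTLCache:
    """Caches answers for ttl seconds; expired entries go upstream again."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}   # name -> (answer, expiry timestamp)
        self.misses = 0   # how many times we had to query upstream

    def resolve(self, name, upstream_lookup):
        entry = self.store.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]                # cache hit: no upstream traffic
        self.misses += 1
        answer = upstream_lookup(name)     # ask the parent/root servers
        self.store[name] = (answer, time.monotonic() + self.ttl)
        return answer

cache = TTLCache(ttl=300)
lookup = lambda name: "192.0.2.1"          # stand-in for a real upstream query
cache.resolve("cnn.com", lookup)           # first client: goes upstream
cache.resolve("cnn.com", lookup)           # second client: served from cache
print(cache.misses)  # 1 upstream query served both clients
```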

That is not to say that the root servers don't have their work cut out for them... :-)

Upvotes: 1
