Analysis: 1,000 New Web Domains
ICANN recently decided to end most restrictions on generic top-level domain (gTLD) suffixes, expanding beyond the 22 extensions currently available (such as .com, .gov, and .edu). Companies and organizations will be able to choose essentially arbitrary suffixes for their Internet domain names. ICANN is the organization responsible for coordinating the domain name system and the Internet protocol address spaces. Our Global Correspondent Jose Cervera analyzes ICANN's landmark decision and its implications.
The nature of the Internet makes the medium almost impossible to control. Anyone who wants to can connect a computer capable of 'speaking Web' and go live on the Internet, and anyone else connected to the network can view its contents. The languages computers use to communicate, technically called protocols, are open and belong to no one.
The technology that forms the guts of the network is open. That is why anyone can publish whatever they wish, and why it is so hard to stop the spread of information. China is spending billions on its Great Digital Wall; record companies have been unable to prevent their music from being pirated; and the U.S. government has been unable to stop WikiLeaks from exposing its diplomacy to ridicule. All because the Internet is not controllable: in the jargon of strategy analysts, it lacks "choke points", physical or conceptual checkpoints where well-exerted pressure can limit or block the flow of information.
The Internet today has no choke points. But it once did. And ICANN's latest initiative, enabling the creation of up to 1,000 new top-level domains (TLDs), attempts to eradicate those blocks forever. Anyone who wants to publish something on the Internet, or connect to it, may do so. But in order to run the network from a technical point of view, someone has to control a scarce resource. In this case, IP addresses: the numerical labels used to identify communicating computers.
Currently, an IP address is a 32-bit binary number conventionally written as four decimal bytes, each between 0 and 255, and separated by dots: 22.214.171.124, for example. This yields a total of 4,294,967,296 possible IP addresses (2 to the power of 32). These numbers are running out, so steps are being taken to replace the current system (called IPv4) with a new one (IPv6), in which IP addresses have 128 bits and allow a vastly larger number of addresses (2 to the power of 128).
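The arithmetic above is easy to check. This short sketch packs the article's example dotted-quad address into a single 32-bit integer, then prints the sizes of the IPv4 and IPv6 address spaces:

```python
import ipaddress

# The article's example address, treated as its four decimal bytes.
octets = [22, 214, 171, 124]

# Pack the four bytes into one 32-bit integer, most significant byte first.
value = 0
for o in octets:
    value = (value << 8) | o

# The standard-library ipaddress module performs the same conversion.
assert value == int(ipaddress.IPv4Address("22.214.171.124"))

print(value)      # the address as a single 32-bit number: 383167356
print(2 ** 32)    # IPv4 address space: 4,294,967,296
print(2 ** 128)   # IPv6 address space: roughly 3.4 x 10^38
```

The same dotted notation that helps humans is, to the machine, just a way of spelling out one large number byte by byte.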
IPv6 addresses are written like 2001:0123:0004:00ab:0cde:3403:0001:0063. The switch to IPv6, which has been repeatedly postponed, will end the shortage of IP addresses. But another shortage would still remain: domain names. We humans on the outer edge of the Internet do not like having to type a number like 126.96.36.199 to reach a website, much less one such as 2001:0123:0004:00ab:0cde:3403:0001:0063. To avoid this, the Domain Name System (DNS) was invented: it translates labels we can understand, called domains (portada-online.com, for example), into IP addresses.
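Conceptually, DNS is just a lookup from a memorable name to a numeric address. A minimal sketch, using a toy table (the entries below are illustrative, not real DNS records; a real Python program would call `socket.getaddrinfo` to query actual resolvers):

```python
# A toy name-to-address table illustrating what DNS does: humans type a
# memorable domain, and the resolver returns the numeric address machines use.
# These mappings are hypothetical examples, not real records.
dns_table = {
    "portada-online.com": "126.96.36.199",
    "example.net": "2001:0123:0004:00ab:0cde:3403:0001:0063",
}

def resolve(domain: str) -> str:
    """Translate a domain name to its address, as a resolver would."""
    return dns_table[domain]

print(resolve("portada-online.com"))   # 126.96.36.199
```

The real system distributes this table across a hierarchy of servers, but the essential service is exactly this translation step.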
This gave the Internet a wonderful choke point, because someone has to maintain the mapping between domain names and IP addresses for any link to be established. In most national-level domains, this task has typically been carried out by a local government entity. For generic domains such as .com, .net, and .org, the task was historically left to Network Solutions, the American company charged with the job by the U.S. government.
A single company and a scarce resource (especially domain names ending in .com, which everyone wanted) created a natural choke point. The result was a fight for control of the DNS and countless problems, especially over domain names matching trade names (so-called 'cybersquatting'). The situation became tense, and when the U.S. government attempted to seize control of the Internet, the community of technologists who had created the network responded. The solution arrived at was Solomonic: a new entity called ICANN was created, responsible from then on for the technical management of the network, including the DNS, among other things.
And since 1998, ICANN has been doing just that. Throughout these years, one of its strategic objectives has been to eliminate bottlenecks in the net's structure to make it less vulnerable to external control. To this end, it has increased the number of so-called root servers, the computers that sit at the top of the DNS hierarchy and direct queries toward the top-level domains (TLDs): initially very limited in number (only 13), they have now reached three hundred.
But there was still another area of scarcity that ICANN wanted to eliminate: the generic domains. With only a handful of TLDs available, there was great demand for names ending in .com (especially), .edu, .org, and .net. So the TLDs were expanded, first with seven more between 2001 and 2004 (.aero, .biz, .coop, .info, .museum, .name, and .pro), and later with a group of sponsored domains (.asia, .cat, .jobs, .mobi, .tel, .travel, and .xxx).
On June 20, ICANN approved the liberalization of gTLDs, which means that any interested company or group can acquire a top-level domain by paying a registration fee plus an annual fee. Experts predict this will create between 500 and 1,000 new domains, mostly from commercial companies seeking greater control over their brand names on the Web. Soon there will be new namespaces everywhere, which will create more confusion but will also remove constraints. With fewer bottlenecks, the Internet will be less controllable and freer, which is a good thing for all.