Maybe Prince was right about the Internet
1439057907  

🏷️ blog

There's a problem with our current DNS (Domain Name System). It is somewhat related to the current crisis of "running out" of IPv4 addresses, but only marginally. The solution to that crisis (a larger address space, called IPv6) will not remedy the issue which concerns me. Though failing to switch would only encourage monopoly, address exhaustion is still not as important an issue in the long run as the centralized nature of the addressing itself.

Our current addressing system maps one-to-one onto a system everyone is already familiar with: the postal system. Consider the following:

System       Thing Addressed   Distribution of payloads   Issuance of addresses
Parcel Post  real property     USPS, DHL, UPS             Postmaster General
Internet     servers           Name Servers               The IANA

It does not take a genius to realize that such a centralized system (it is a traditional hierarchy) is susceptible to abuse by those at the top. For a system to be truly fault tolerant, it must be incapable of being decapitated, and incapable of being abused and distorted by the head. Our current DNS is somewhat resistant to decapitation, in that it has 13 "heads" in the form of the DNS root servers. However, it has repeatedly been abused by its owners to censor name servers (by removing them from a root's listings), and to seize IP addresses and aliases (domain names). Furthermore, there are many who simply "squat" on domains and sell them in a manner similar to scalping tickets.

In order to overcome these weaknesses, some have set up alternative root zones. These are traditionally known as "darknets", as they are not concurrently viewable by a single observer. This is obviously a sub-optimal solution, as now we have many hierarchical systems, each with a party that could abuse its authority. We could replace the authority with something that is not likely to be abused (such as a force of nature), but then we would still need some party to provide routing data. An example would be an extremely precise use of longitude and latitude, coupled with a wireless mode of communication for all parties. However, that routing party becomes a new head, so this solution gains vulnerability to disruption (decapitation), and is thus suboptimal.

So what is the solution? A non-hierarchical (peer-to-peer) system. Some would lambaste such a system as tantamount to anarchy (which it is), whilst forgetting that the nature of life employs an identical scheme. No two identical beings can be alive at once; even clones hold differing state. The same is true of computer hardware. Thankfully, there is such a system; but it requires us to look back in time, and consider why in particular it did not win out.

NetBIOS over TCP/IP (NBT) was a simple way for small TCP/IP networks to set up host names amongst themselves, so that you don't have to remember IP addresses. It is still used on most home and small business networks, primarily for one very good reason: host names do not require a centralized issuing authority. The reason DNS won out over it is also fairly simple. The mechanism by which NBT verifies that there are no duplicate hostnames is to shoot out a broadcast packet asking any host already holding the name to identify itself. It does not take a PhD to realize that this, while computationally easy, is quite intensive on the network, and on the wider internet could require a truly enormous pipe to handle all the responses.
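
As a rough illustration, here is a minimal sketch of that claim-by-broadcast scheme in Python, assuming a single shared network segment. The port and the "WHO-OWNS" text protocol are invented for this sketch (real NBT name service uses UDP port 137 with a binary packet format):

    import socket

    BROADCAST_ADDR = ("255.255.255.255", 5355)  # hypothetical port, not NBT's real 137
    TIMEOUT_SECONDS = 2.0

    def claim_name(name: str) -> bool:
        """Broadcast a query for `name`; claim it only if nobody defends it."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(TIMEOUT_SECONDS)
        try:
            sock.sendto(b"WHO-OWNS " + name.encode(), BROADCAST_ADDR)
            reply, owner = sock.recvfrom(1024)   # an existing owner defends its name
            print(f"{name} is already held by {owner[0]}: {reply!r}")
            return False
        except socket.timeout:                   # silence is taken as consent
            print(f"No one answered; claiming {name}")
            return True
        finally:
            sock.close()

    if __name__ == "__main__":
        claim_name("myhost")

Note that every single claim costs the whole network a broadcast and a round of replies, which is exactly the scaling problem the next tweaks address.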

There are ways, however, that this can scale nearly as large as one wants it to. The first tweak would be to allow (but not require) hosts to control the domains of other hosts (which in today's parlance would be called sub-domains). This would be fairly easy for ISPs and corporations to roll out, as it is roughly analogous to how they currently do things. Many ISP customers simply have one static IP address, and use NAT to funnel everything through their gateway. Many corporations accomplish the same thing by having all sub-hosts proxy through a few gateways. These techniques would reduce the overall number of hosts exposed to the greater internet. It is worth noting that even DNS does this.
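
To make the delegation idea concrete, here is a sketch using in-memory tables in place of a wire protocol; all host names and addresses are invented for illustration:

    # Only the gateways are visible to the wider network; anything beneath a
    # delegated name is resolved by the controlling host. The inner table here
    # stands in for a query that would really travel to the gateway itself.
    VISIBLE_HOSTS = {"example-gw": "203.0.113.7"}            # the whole site, to outsiders
    DELEGATIONS = {"example-gw": {"printer": "192.168.1.9",  # inner hosts, known
                                  "nas": "192.168.1.10"}}    # only to the gateway

    def resolve(name: str) -> str:
        """Resolve 'host' or 'sub.host' names, delegating the 'sub' part inward."""
        *sub, gateway = name.split(".")
        address = VISIBLE_HOSTS[gateway]         # KeyError means unknown gateway
        if not sub:
            return address
        return DELEGATIONS[gateway][".".join(sub)]

    print(resolve("example-gw"))          # 203.0.113.7
    print(resolve("printer.example-gw"))  # 192.168.1.9

The wider network only ever has to broadcast for the gateway; everything behind it rides on that one name.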

Next, we could borrow another technique from the domain name system: powerful servers with the large amounts of bandwidth needed to handle broadcast responses, which cache routes to known hosts. Traffic could be kept down further by having these giants do the broadcast only once or twice a day.
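
A sketch of such a caching giant follows, assuming a sweep() primitive that performs the expensive network-wide broadcast and returns a name-to-address table; the interval is the "once or twice a day" figure from above:

    import time

    SWEEP_INTERVAL = 12 * 60 * 60   # seconds: twice a day

    class NameCache:
        def __init__(self, sweep):
            self.sweep = sweep       # callable returning {name: address}
            self.table = {}
            self.last_sweep = 0.0

        def lookup(self, name: str) -> str:
            """Serve from cache; pay the broadcast cost only on schedule."""
            if time.time() - self.last_sweep > SWEEP_INTERVAL:
                self.table = self.sweep()        # the one expensive operation
                self.last_sweep = time.time()
            return self.table[name]              # KeyError means unknown host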

Considering that these techniques are needed to make DNS work in the first place, I suspect that NBT would work just fine when using them. So why did it lose out, and DNS win? Well, it is because its greatest strength (no need to get permission from a central switching station for hostnames) is also its greatest weakness. Again, like life: when a system goes down, its name dies with it, and said name can subsequently be claimed by anyone else who wants it.

Considering the flock of vultures that already swoops up expired domains, you can imagine how much more this would be hated by anyone who has come to rely on a brand or a trademark. It could be mitigated by the large hostname cache servers: if they made their discovery schedules known, you would have the opportunity to get back up before the next scan. But also consider how much is charged for the most desired domain names; it may end up much cheaper to run redundant systems that achieve 100% uptime than to hold a domain (especially in the future, when things are likely to become more, not less, monopolized).
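
That mitigation might look something like the following sketch, where a name is only released after missing a published number of consecutive sweeps; the two-miss allowance is an invented figure:

    MISSES_ALLOWED = 2   # hypothetical, published alongside the scan schedule

    def prune(table: dict, responders: set) -> dict:
        """Run after each sweep; release a name only after repeated silence."""
        kept = {}
        for name, (address, misses) in table.items():
            if name in responders:
                kept[name] = (address, 0)            # seen: reset the counter
            elif misses + 1 < MISSES_ALLOWED:
                kept[name] = (address, misses + 1)   # missed, but within grace
            # otherwise the name lapses and may be claimed by anyone
        return kept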

So that's why we got an internet that mirrors the Dracula-like life of corporations and governments: because it is useful to them to be that way. But it is nice to know that we can set up a parallel network in which free communication would be much, much harder to stamp out. Perhaps we should start work on that.
