The 93.1 per cent of people in our region who use the Internet are mostly very excited about the current roll-out of the NBN (National Broadband Network). With promises of cheaper, more reliable connections, it does sound fantastic. Speed is obviously a major attraction as well, and this is the area I want to focus on. You will regularly see promises of download speeds of 100Mbps available with Fibre to the Premises (FTTP) NBN. What does this number actually mean? Most people in our area have been using a form of ADSL, which has offered maximum theoretical download speeds of 8Mbps or 20Mbps. 100 is a bigger number than 20, so this NBN thing must be a good idea!
The Internet is a complicated beast. There are 75 million servers across the world
that deliver the data that we request. How these servers are connected to the
Internet is critical to the experience we all have as Internet users.
When we see speeds quoted as 100Mbps, that is really just a measurement of the connection speed between our local exchange and the router in our home or business. The NBN speed means that the speed of delivery to our premises has suddenly increased by a factor of five to more than twelve (100Mbps is five times the 20Mbps maximum and twelve and a half times the 8Mbps maximum). That is not a percentage; that is a multiple. In some areas of Dubbo, real-world download speeds have increased by more than 30 times compared to the previous speeds.
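To put those numbers in everyday terms, here is a rough sketch in Python of how long a download takes at each of the speeds quoted above. The 1GB file size is my own illustrative assumption, not a figure from any plan.

```python
# Rough download times at the speeds discussed above.
# The 1 GB file size is an illustrative assumption.
FILE_SIZE_BITS = 1 * 8 * 1000**3  # 1 GB expressed in bits

for label, mbps in [("ADSL 8Mbps", 8), ("ADSL 20Mbps", 20), ("FTTP NBN 100Mbps", 100)]:
    seconds = FILE_SIZE_BITS / (mbps * 1000**2)  # bits divided by bits per second
    print(f"{label}: about {seconds / 60:.1f} minutes for a 1 GB file")
```

At 8Mbps that 1GB file takes around a quarter of an hour; at 100Mbps it takes well under two minutes.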
This places increasing pressure on other areas of Internet infrastructure. Having a speed of 100Mbps from your exchange to your house is a waste of time if the server farm you want to access information from cannot upload data fast enough to fill that link. The world seems to like using water analogies for the Internet, so I won't stray from the concept. Imagine that you have a 20mm
water pipe from the water tower to your house. If you install a 100mm water
pipe from your water meter to your shower, it will not allow a sudden increase
in the flow of water when you are belting out your morning tunes in the shower.
The bottleneck in this case is the water mains feeding the actual house.
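If you prefer numbers to plumbing, the same bottleneck idea can be written as a minimal sketch: your real speed is set by the slowest link in the chain, no matter how fast your own connection is. The 15Mbps figure for the far end is purely an assumed example.

```python
# End-to-end throughput can never exceed the slowest link in the path.
def effective_speed_mbps(*link_speeds_mbps):
    return min(link_speeds_mbps)

# A 100Mbps NBN connection talking to a server that can only send at 15Mbps (assumed figure).
print(effective_speed_mbps(100, 15))    # -> 15
# Upgrading your own link changes nothing while the far end remains the bottleneck.
print(effective_speed_mbps(1000, 15))   # -> 15
```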
What is more common in the Internet world is the concept that not all users on a
link are using the Internet to its maximum at the same time. To stay with the
water example, imagine a 100mm water mains from a water tower that was used to
service 50 houses with each house having a 20mm water pipe from the water meter
into the house. If only one house turned on the shower, that person would receive the full volume of water, as the whole 100mm mains would be feeding a single 20mm pipe. Now imagine everyone having a shower at 7.30am. The 50 houses would be sharing the 100mm mains, meaning each potential Pavarotti would be left with a dribble from an effective 2mm pipe.
The Internet works in a similar way, and you will often hear contention rates quoted of 20:1 right up to 50:1 and higher. Contention rates are also often called oversubscription rates. The logic is that the infrastructure does not need to be built to a level where every connected premises can use the full potential of its link all the time. A 20:1 contention rate means that if every connection ran at its maximum speed at once, the total demand would be 20 times the capacity actually available. To keep infrastructure costs at a reasonable level this is sensible, but you really see the impact when school finishes and students come home and jump straight on their computers (obviously to do their homework). The 4pm slowdown is famous across the nation and comes down to those contention rates. At this time of day, the number of connections trying to use their full bandwidth at once increases dramatically, and hence the theoretical capacity simply cannot be delivered to everyone.
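For the curious, here is a rough sketch of that contention arithmetic. The 100Mbps plan, the 20:1 ratio and my guesses at how many users are online at each time of day are all illustrative assumptions rather than measured figures.

```python
# Effective per-user speed on an oversubscribed link.
# Capacity is provisioned at plan/contention_ratio per subscriber;
# active users share it, and nobody can exceed their own plan speed.
def per_user_speed_mbps(plan_mbps, contention_ratio, fraction_active):
    provisioned_per_user = plan_mbps / contention_ratio
    if fraction_active == 0:
        return plan_mbps
    return min(plan_mbps, provisioned_per_user / fraction_active)

for hour, active in [("3am", 0.02), ("11am", 0.20), ("4pm", 0.80)]:
    speed = per_user_speed_mbps(100, 20, active)
    print(f"{hour}: roughly {speed:.0f}Mbps each with {active:.0%} of users online")
```

On those assumed numbers, the same 100Mbps plan feels like a 100Mbps plan at 3am and more like a 6Mbps plan at 4pm.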
Going one step deeper into the world of Internet infrastructure, even if you assume the connection between the exchange and your house can deliver your theoretical speed, consider the connection of the server farm you are trying to access. Bandwidth at that level also has a contention rate, and there are times when demand outstrips what can be delivered. When you hear of a certain trend or activity ‘crashing’ a Web site, it is normally a reference to the fact that the bandwidth capabilities of that server have been exceeded and the information simply cannot be delivered quickly enough.
The Internet is a global network. That means that we don't just look at Web sites or access data from within Australia. In fact, we don't often think about where our data is coming from; we just want to access it. Dragging data between continents is the major challenge that faster end-user connections now present. Satellites may initially seem like a good idea, but the fact that geostationary satellites orbit 35,786km above the earth results in very poor latency: close to half a second for a round trip before any data even arrives. That latency is simply unacceptable to the Internet community. The solution is an incredible network of undersea (or submarine) cables.
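The latency problem is simple physics. Here is a back-of-the-envelope sketch using only the orbital altitude mentioned above and the speed of light; the assumption is that a request and its reply must each travel up to the satellite and back down.

```python
# Minimum round-trip delay via a geostationary satellite.
SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786

one_way_ms = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000  # ground to satellite
round_trip_ms = one_way_ms * 4  # up and down for the request, up and down for the reply
print(f"Best case round trip via satellite: about {round_trip_ms:.0f}ms")
```

That works out to roughly 477ms before a single byte of the answer arrives, several times worse than a typical trans-Pacific round trip over fibre, and it is a limit no amount of engineering can remove.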
Many years ago Google saw that this was a major potential issue for its business model and started investing in undersea cables. The trans-Pacific Unity cable, which came online in 2010, was Google's first investment and had a capacity of 7.68Tbps (terabits per second). This was massive capacity at the time but, with demand continuing to grow at incredible rates (41 per cent in Australia last year), the FASTER consortium, of which Google is the major player, has just announced its latest undersea cable. The link between the US and Japan is the highest-capacity undersea cable built to date at 60Tbps. That will soon be dwarfed, though, with Microsoft and Facebook jointly announcing that they are about to start construction of ‘Marea’, which will offer speeds of 160Tbps.
All of this is being built so you don't have to wait too long when you go hunting for Pikachu in Pokémon Go!
Mathew Dickerson