In a country as big as Australia, the speed of light becomes a limiting factor that can adversely affect customer experience. I’ll explain how.
Round-trip time (RTT) is the time it takes for a network packet to travel from a local device to a remote device across a network, plus the time for a reply packet to travel back from the remote device to the local device.
Various factors can affect how long it takes to transmit a packet. Large packets are limited by link speed (a 64-kilobyte packet takes roughly half a second, ~524 milliseconds, to transmit on a 1 megabit/second link), so faster links naturally move the same data more quickly.
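That transmission-time figure is just payload size over link speed. A minimal sketch (the packet size and link rate are the illustrative figures from the text; protocol overhead is ignored):

```python
def transmission_time_ms(payload_bytes: int, link_bits_per_s: float) -> float:
    """Time to clock a payload onto the wire, ignoring protocol overhead."""
    return payload_bytes * 8 / link_bits_per_s * 1_000

# 64 KB on a 1 Mbit/s link: ~524 ms
print(transmission_time_ms(64 * 1024, 1_000_000))
```

Real links add framing and protocol overhead on top of this, so actual figures are somewhat higher.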
But, even with the fastest links, network latency (round-trip time) becomes an important factor when distance is involved. No packet can travel faster than the speed of light.
Consider the round-trip time at the speed of light between the following cities:
From | To | Kilometres | RTT¹ |
---|---|---|---|
Melbourne | Sydney | 878 | 9ms |
Melbourne | Brisbane | 1,667 | 17ms |
Melbourne | Perth | 3,418 | 34ms |
Melbourne | Singapore² | 6,026 | 60ms |
Melbourne | Tokyo² | 8,144 | 81ms |
Melbourne | London² | 16,904 | 169ms |
- 1. Round-trip time calculated as twice the Google Maps driving distance between the two locations, divided by 200,000km/s (an approximation of the speed of light in optical fibre)
- 2. Distance used in the calculation is the great-circle distance between the two locations
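The footnote formula — twice the distance, divided by 200,000 km/s — is easy to reproduce as a short script (the distances are those listed above):

```python
# Theoretical minimum round-trip times at the speed of light in optical fibre.
FIBRE_SPEED_KM_S = 200_000  # approximate speed of light in fibre (~2/3 of c)

distances_km = {
    "Sydney": 878,
    "Brisbane": 1_667,
    "Perth": 3_418,
    "Singapore": 6_026,
    "Tokyo": 8_144,
    "London": 16_904,
}

def rtt_ms(distance_km: float) -> float:
    """Round trip is out and back, so twice the one-way distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1_000

for city, km in distances_km.items():
    print(f"Melbourne -> {city}: {rtt_ms(km):.0f} ms")
```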
Those round-trip times are idealised best cases; real-world experience will be considerably worse once you take into account all the routers a packet must traverse, congestion, transmission speed, and more.
Consider the scenario where you are loading a web page hosted in Perth on a computer in Melbourne. Ignoring the TCP three-way handshake and transmission times, a typical modern, complex JavaScript-based web page with separate round-trip requests for 50 URLs would require:
50 x 34ms = 1,700ms
That is already noticeable, although a typical web browser will use concurrency to fetch those requests four at a time, bringing that number down to ~425ms.
But what if that same website were hosted in London? Then you’re looking at a minimum round-trip time (including concurrent downloading) of about 2.1s.
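The arithmetic generalises to a simple lower-bound model: with c concurrent connections, n requests cost at least n/c round trips. A minimal sketch, deriving the RTTs from the same distance-over-fibre-speed formula rather than hardcoding them:

```python
def min_load_time_ms(requests: int, rtt_ms: float, concurrency: int = 1) -> float:
    """Latency lower bound: each round trip serves `concurrency` requests."""
    return (requests / concurrency) * rtt_ms

perth_rtt = 2 * 3_418 / 200_000 * 1_000    # ~34 ms, Melbourne-Perth in fibre
london_rtt = 2 * 16_904 / 200_000 * 1_000  # ~169 ms, Melbourne-London in fibre

print(min_load_time_ms(50, perth_rtt))      # serial: ~1.7 s
print(min_load_time_ms(50, perth_rtt, 4))   # four at a time: ~425-430 ms
print(min_load_time_ms(50, london_rtt, 4))  # London, four at a time: ~2.1 s
```

This is deliberately optimistic: it ignores connection setup, transmission time, and the fact that requests rarely divide evenly across connections.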
Some websites get considerably more complex. For example, when this article was composed, The Age’s website required a browser to make 227 requests.
For the most part, websites can “get away” with many individual requests because they tend to be oriented towards local users, or they use a global content delivery network (CDN) to cache content closer to the user. Browsers also cache a lot of unchanging content themselves, such as style sheets and JavaScript code. But customers of localised sites who are on holiday overseas can still suffer when trying to access their accounts.
There are situations, however, where a client program does not conduct requests in parallel at all. I’ve seen this with the CORBA protocol which, because of its nature, requires request-response pairs to be serialised.
If you have a CORBA server, expect to use a CORBA application client, and the login process makes 1,000-odd requests, then even between Melbourne and Sydney (a 9ms round trip) that process will take an absolute minimum of 9 seconds. Run the same process between Melbourne and Perth and the theoretical minimum becomes 34 seconds, and that is before any other factors are taken into account.
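The serialised case is the degenerate form of the same model, with concurrency fixed at one: n calls cost at least n × RTT no matter how fast the link is. A minimal sketch (the 1,000-call login is the scenario from the text; the RTTs are the idealised fibre figures):

```python
def serial_floor_s(calls: int, rtt_ms: float) -> float:
    """Absolute latency floor for strictly serialised request-response calls."""
    return calls * rtt_ms / 1_000

print(serial_floor_s(1_000, 9))   # Melbourne-Sydney: 9.0 s
print(serial_floor_s(1_000, 34))  # Melbourne-Perth: 34.0 s
```

Note that no amount of extra bandwidth changes this floor; only reducing the call count, batching calls, or moving the endpoints closer together does.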
Latency has a serious impact on any non-trivial communication process, and it always has to be considered when delays present themselves.