By Keith Buckley, regional VP, Australia and New Zealand at Riverbed Technology.

One hopes that with the promise of a New Year ahead we'll have learned significant lessons from the 12 months prior. But sometimes lessons, no matter how well-documented or important they seem, aren't learned by everyone, dooming us to repeat the mistakes of the past. Case in point: you'd think that after the Click Frenzy fail back in 2012 we'd have learned our lesson about expecting mass traffic on the network in times of online retail fervour. And while Click Frenzy has run extremely well since its opening pains, many sites are still struggling with increased – albeit surely expected – network demands.
With Australian gamers gripped by 8-bit nostalgia, late last year the website for a video games retailer went down when thousands of gamers visited the site looking to pre-order the much-hyped Mini NES – the retro mini-replica of the classic Nintendo Entertainment System. The flurry led to frustration among customers who were repeatedly asked to refresh their pages – which fans were no doubt doing en masse, given the company's advice that only an "extremely limited" supply of the consoles was available for pre-order.
The retailer released a statement on Twitter, which said: “Despite juicing up our servers, our website just couldn’t cope with the record traffic of tens of thousands of enthusiastic gamers. We were running 45 servers, each with 32 CPUs, for a total of 1440 CPUs handling the website.
“On a normal Tuesday, we have about 500,000 page views. Yesterday we hit over 7,500,000.”
The upgrades to the servers weren't enough in the end, although it wouldn't be fair to single out one retailer, as this is far from an isolated incident. Towards the end of last year, many Adele fans across the country were left waiting for the websites of two online ticketing providers to refresh so they could purchase tickets for her first Australian tour; frustrated and anxious consumers took to social media to voice their displeasure.
Are these two examples signs that we’re still working out the kinks in managing traffic, even when mass gridlock is to be expected?
We'd all like to hope we are, because there are no signs that online sales are set to slow. According to the National Australia Bank's (NAB) latest Online Retail Sales Index report, online spending increased 14.2 per cent over the 12 months to September 2016. And it's not just the young digital natives shopping online: research commissioned by the NBN found that 59 per cent of grandparents surveyed have used the internet for shopping.
With that kind of traffic expected, retailers need to be prepared for the worst-case scenario when it comes to online sales. Expecting the worst means ramping up the network capabilities and capacity to cater for current and future demand – and for the random spike that well exceeds even the most bullish expectations.
Often, website failures are a result of several network performance issues, such as poor change management, denial of service attacks, or simply lack of capacity on the hosting platform. Click Frenzy realised this, and went with a more robust hosting service to help it stay online during the height of subsequent ‘frenzies’ post-2012.
But not every retailer can so easily switch hosting services. For those retailers, the IT department should regularly test the network and all the applications that connect to it, and establish a real-time view of the entire network so they can quickly identify and fix the causes of outages or slowdowns. If a company doesn’t have the monitoring and diagnostics in place to detect where the issues are, it can take much longer to resolve and get the website back online.
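The kind of real-time monitoring described above can start very simply: probe the site's key URLs on a schedule, time each response, and turn the result into a coarse verdict that can trigger an alert before customers notice. The sketch below is illustrative only – the two-second "slow" threshold and the pluggable `fetch` parameter are assumptions for the example, not a reference to any particular monitoring product.

```python
import time
from urllib.request import urlopen


def probe(url, fetch=urlopen, timeout=5.0):
    """Time a single request to `url`; return (status, elapsed_seconds).

    `fetch` is injectable so the logic can be exercised without a live
    network (an assumption of this sketch, not a standard pattern).
    """
    start = time.monotonic()
    try:
        with fetch(url, timeout=timeout) as resp:
            status = getattr(resp, "status", 200)
    except Exception:
        status = None  # any failure counts as an outage signal
    return status, time.monotonic() - start


def classify(status, elapsed, slow_after=2.0):
    """Turn one probe result into a coarse health verdict."""
    if status is None or status >= 500:
        return "down"   # no response, or a server-side error
    if elapsed > slow_after:
        return "slow"   # responding, but users are waiting
    return "ok"
```

Run on a short interval against the storefront, checkout and search pages, a loop like this gives the IT team the baseline view the paragraph above calls for: a "slow" verdict flags a building bottleneck, while "down" flags an outage worth paging someone about.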
Retailers need tools that can expose and diagnose bottlenecks before their users and customers feel the impact. This is especially important for companies with a hybrid IT infrastructure whereby applications are deployed in multiple data centres and accessed from various locations, increasing the complications around performance management.
Speed, reliability and ease of use are major priorities for shoppers. Keep in mind, too, that much of the traffic to retail e-commerce sites is likely to come from mobile devices, so the development of robust, sophisticated mobile apps is paramount.
But much of the issue comes down to retailers needing to expect the unexpected. The website crash for the aforementioned games retailer, while small in the grand scheme of the online shopping world, shows that we can't simply go off trend lines. Per its statement, it saw a fifteen-fold increase over its standard Tuesday traffic, and while it did everything within its power to ramp up capability on its network, it wasn't enough.
We need to expect gridlock when it comes to online traffic and go from there. Retailers cannot afford to simply expect a steady rise in traffic. They need to expect dramatic, unprecedented spikes, particularly when the marketing department launches new campaigns or a new, highly anticipated product hits the virtual shelves. The retailers that expect the unexpected will be ready in advance to re-allocate bandwidth and prepare additional servers as necessary.
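Planning for spikes rather than trend lines can be made concrete with back-of-the-envelope arithmetic. The sketch below uses the article's own figures – roughly 500,000 page views on a normal day and a fifteen-fold spike – but the per-server capacity figure is a made-up planning number for illustration, not a measured value from the retailer.

```python
import math


def servers_needed(baseline_views, spike_multiplier, views_per_server):
    """Servers required to absorb a traffic spike, rounded up.

    `views_per_server` is whatever one server comfortably handles over
    the same period – a figure each retailer must measure for itself.
    """
    peak = baseline_views * spike_multiplier
    return math.ceil(peak / views_per_server)


# Hypothetical planning figure: one server handles 125,000 views/day.
normal_day = servers_needed(500_000, 1, 125_000)    # 4 servers
frenzy_day = servers_needed(500_000, 15, 125_000)   # 60 servers
```

Under that assumed per-server figure, a fleet sized for a normal day (or even the 45 servers the retailer actually ran) falls well short of the 60 a fifteen-fold spike would demand – which is the point: capacity plans built on the steady trend line, not the worst plausible spike, fail exactly when the marketing campaign succeeds.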
The ones that haven't made network health a priority need to. The upward trend in online shopping is not set to cease anytime soon, so the networks need to stay well ahead of that line. An unexpected spike in traffic may cripple the network and lead to an unhappy customer base, one that may be reluctant to return when it can simply click a mouse and visit a competitor. Thus, knowing what's happening on the network is critical not only for seeing and predicting bottlenecks, but for fixing issues when they do occur so the website is back online in no time – and those customers have no reason to shop elsewhere.