Last month’s Cloud Expo Europe was again ‘co-located’ (see what they did there…?) with Data Centre World, meaning that, as a trade fair, exhibitors collectively covered everything from fuel polishers, heat exchangers, fire suppression systems, racks, cables, and fibre and microwave links for datacentre aficionados; through all the “as-a-Service” merchants (from IaaS and PaaS up to Communications-, Network-, Monitoring-, and Disaster-Recovery-as-a-Service, etc.); to security and identity specialists, often seen partnering with the SaaS-based content storage and collaboration vendors which usually grace the pages of this blog.
And this range of interests was represented in the array of speakers lined up around the 11 dedicated conference streams (ranging from datacentre equipment management; through service provider and cloud ecosystem interests; to virtualisation, security, and compliance). But what I’d really come to see were some of the speakers on the Keynote and Data Analytics / Internet of Things stages – the latter looking a little bolted-on amongst all the cloudy this and cloudy that dominating the rest of the programme. But these days, everyone feels the need to flaunt their IoT credentials. However, the presentations there didn’t disappoint, taking an interesting ‘cloud-enablement’ take on the now well-worn “instrumented devices will eat the world” (or at least all your bandwidth, storage, and processing capacity) warnings.
The message, from speakers like Kalman Tiboldi (Chief Business Innovation Officer at agricultural / industrial plant vehicle sales, spares, and rental company TVH), was that only the cloud can scale to accommodate the connectivity, analysis and transformation, storage, abstraction, application, and business processing required to bring physical devices and controllers at the ‘edge’ to people and processes at the ‘centre’ (according to the reference model proposed at last year’s IoT World Forum). New business models, like predictive maintenance (something TVH is keen to exploit, with a large fleet of rental forklifts, aerial work platforms, and the like to manage as efficiently as possible), become much more viable when there are device, network, and application domains within the cloud which can be used to monitor sensors in a truck’s moving parts, aggregate the readings to understand usage patterns, and recommend the most cost-effective schedule of works to keep vehicles on the road.
Over in the Keynote Theatre, on more generalist cloud issues, Financial Times CTO John O’Donovan won the Longest Presentation Title rosette (32 words) and regaled a packed auditorium with a talk describing a career of cloud adoption across different projects, different employers, and different clouds. His main messages were to beware of scalability ceilings on private clouds, and of the residual change-request culture and Capex spending expectations that routinely dog vendors’ own clouds. On public clouds, he shared his positive experiences of working with Amazon Web Services at the FT (which I covered in a blog on last year’s AWS Enterprise Summit). For example, being able to try out AWS’ Redshift database service cheaply obviated the need for a time-consuming RFP: the FT quickly saw good results, which made the purchasing decision much more straightforward. And, immediately prior to joining the FT, O’Donovan served as the Press Association’s Director of Architecture and Development – where public cloud elasticity was an essential plank of the PA’s success in delivering digital services from the 2012 Olympics in London. So successful, in fact, that he revealed the latency from Olympic Park to API endpoints was so small that results were being made available to websites in advance of supposedly live broadcast footage reaching people’s televisions… meaning that a delay had to be inserted so as not to spoil endings for twin-screening viewers with one eye on their digital device and the other on the telly.
As O’Donovan put it at the close of his talk, many aspects of public cloud services (such as security, availability, and reliability) have become so much better than an organisation could achieve for itself – and at much more economical cost – that, for many use cases, it’s “easier to use than not to use”.