The Large Hadron Collider experiments manage tens of petabytes of data spread across hundreds of data centres. Handling and processing this volume required significant infrastructure and novel software systems, involving years of R&D and extensive commissioning in preparation for the LHC's first data. The evolution of this global computing infrastructure, and the specialisations made by the experiments, hold lessons for many commercial "big data" users. This talk looks at the data and workflow management systems of one of the LHC experiments and draws out successes, weaknesses and interesting organisational issues that have parallels in a commercial setting. Filmed at JAX London 2013.