Last year I looked at a startup called ShredVideo, which used a smart algorithm to take a large amount of raw video footage (of a bike ride, for instance) and turn it into something suitable for YouTube or similar platforms.
It would break the footage down into a manageable yet logical montage, with music fitted to the footage to match the pace of the video at particular moments.
As a technology it’s pretty cool, especially as wearable devices and drones allow us to record more and more of what we do.
A team of researchers from the Georgia Institute of Technology has recently developed its own software to perform a similar job.
Their software analyzes the video footage and hunts for frames with particular artistic properties, looking especially at composition, location, and color to determine each still’s importance to the video or its artistic merit.
Once it’s identified a number of high scoring frames, these are processed into a highlights reel for the video. You can see the footage created below.
The reel was produced from 26 hours of raw footage and took three hours to generate.
“We can tweak the weights in our algorithm based on the user’s aesthetic preferences,” the team say. “By incorporating facial recognition, we can further adapt the system to generate highlights that include people the user cares about.”
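The idea of scoring frames on weighted aesthetic features, then assembling the highest scorers into a highlight reel, can be sketched in a few lines. This is a minimal illustration, not the researchers’ actual algorithm: the feature names, values, and weights below are all hypothetical assumptions.

```python
# Hypothetical sketch of weighted frame scoring. The feature extractors,
# weights, and frame data are illustrative, not the researchers' system.

def score_frame(features, weights):
    """Combine per-frame aesthetic features into one weighted score."""
    return sum(weights[name] * value for name, value in features.items())

def pick_highlights(frames, weights, top_k=3):
    """Return the top_k highest-scoring frames, restored to original order."""
    ranked = sorted(frames, key=lambda f: score_frame(f["features"], weights),
                    reverse=True)
    return sorted(ranked[:top_k], key=lambda f: f["time"])

# Toy frames with made-up feature values in [0, 1]
frames = [
    {"time": 0, "features": {"composition": 0.2, "color": 0.5, "location": 0.1}},
    {"time": 5, "features": {"composition": 0.9, "color": 0.8, "location": 0.7}},
    {"time": 9, "features": {"composition": 0.6, "color": 0.4, "location": 0.9}},
]

# These weights are the knobs the quote refers to: adjusting them
# changes which frames count as highlights for a given user.
weights = {"composition": 0.5, "color": 0.3, "location": 0.2}

highlights = pick_highlights(frames, weights, top_k=2)
print([f["time"] for f in highlights])  # → [5, 9]
```

Tweaking the weights to a user’s aesthetic preferences, as the quote describes, would simply mean changing the values in `weights` rather than the selection logic itself.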
It’s an interesting approach, and with Google researchers recently revealing that they had been able to accurately detect the location of images just by looking at them, it seems likely that we will increasingly have this capability even without GPS data attached to the footage.
Indeed, with one half of the team behind this project currently interning at Google, it may well be a project they themselves will look to develop as an additional capability for YouTube producers the world over.