From the earliest days of photography, people have experimented with altering images. It started with simple darkroom techniques and evolved into a range of powerful tools that many of us use to "Photoshop" images. And as this technology has evolved, we, as consumers of the images, have learned to accept the innocuous modifications (e.g. removing a pimple from a face) as well as to easily reject the more outlandish ones ("the following was Photoshopped").
And even in the earliest days of the motion picture industry, practitioners began manipulating the dynamic aspects of images. The 1902 film "A Trip to the Moon" contained many effects that amazed the silent film audiences of that time. The techniques would be considered almost juvenile today (stopping and starting the camera to make things appear or disappear, running the camera backwards to reverse what we consider normal motion, etc.), but at the time, people watching these films stopped and wondered how much of what they were seeing was "real". Today almost every film contains some sort of CGI, and some of it is so good most people never notice it (was that castle really there?). Of course, science fiction films generally contain massive amounts of CGI, but even there most of us only detect the more outlandish instances.
Ever since TV weather forecasters began using them, "green screens" have been a standard tool of the industry. "Green screen" is a live (real-time) technology, but in concept it is a fairly simple pasting together of two moving images. The viewer can usually detect the compositing because of mismatches in lighting, parallax, relative movement, odd juxtapositions, etc.
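To make the "pasting together" concrete, here is a minimal sketch of chroma-key compositing in Python. It is a toy illustration, not broadcast software: the frames are assumed to be RGB numpy arrays, and the key color and tolerance values are arbitrary choices for the example.

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), tol=60):
    """Composite foreground over background wherever the foreground
    pixel is far (in RGB distance) from the key color."""
    fg = foreground.astype(np.int32)
    dist = np.linalg.norm(fg - np.array(key), axis=-1)
    mask = dist > tol  # True where the subject is, False on the green screen
    return np.where(mask[..., None], foreground, background)

# Tiny synthetic frames: a 2x2 "foreground" with two pure-green pixels.
fg = np.array([[[0, 255, 0], [200, 50, 50]],
               [[0, 255, 0], [10, 10, 200]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)  # flat gray "weather map"

out = chroma_key(fg, bg)
# Green pixels are replaced by the background; the others keep their color.
```

Running this per frame on two video streams is, in essence, all a live green-screen composite does, which is why mismatched lighting or parallax between the two sources is so easy to spot.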
Now it's time to get ready for another sea change in moving image manipulation. New developments in video processing software can reach into the dynamic image and change things in surprising ways. One of the more astonishing (and maybe frightening) developments combines a live video feed of one person's face with video of a second person's face that serves as a "puppet control". While one person is speaking on camera (remember, this is real time), another person, the puppeteer, can dynamically impose new expressions and facial movements.
The video below demonstrates this technology.
But this sort of real-time modification is also possible for modifying the "time and space" representation of physical objects. Here is how researchers are detecting and dynamically amplifying subtle differences in video. The examples they show clearly go beyond theatrical uses; they seem like they would be quite useful in medical and law enforcement contexts today.
Here is another example of applying this type of dynamic magnification to an inanimate object, in this case a vibrating wine glass. It doesn't take a lot of imagination to see how this technique could be applied in a vast number of domains. For art or science? Like all powerful tools, it is likely to be misused too, so we should all be prepared to detect these modifications, coming soon to a video feed near you!
As a final thought and parting image, this technique was applied to a fetus moving in the womb. I find it quite mesmerizing. My wife tells me that it actually feels more like the picture on the right. I guess I'll have to take her word for it!