Creating a Bulging Eyes Purikura Effect With Core Image

As an engaged pupil, I recommend this tutorial on using Apple's Core Image. As you'll find out below, the eyes have it!

One of my fellow speakers at try! Swift, Mathew Gillingham, suggested I look at the "bulging eyes" effect used in Japanese Purikura photo booths. Always up for a challenge, I thought I'd spend a quick moment repurposing my cartoon eyes experiment to simulate the effect using Core Image. The final project is available in my Purikura GitHub repository.
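
One thing the excerpts below take for granted is the CIDetector doing the face detection. Its setup isn't reproduced in this post; a minimal version, assuming the Swift 2-era API used throughout the project, might look like this:

    import CoreImage

    // A face detector; high accuracy gives more reliable eye positions,
    // though CIDetectorAccuracyLow may be a better trade-off for live video.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: nil,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])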

The main change to this project is in the eyeImage(_:) method. After ensuring that the Core Image face detector has data for the left and right eye positions:

    if let features = detector.featuresInImage(cameraImage).first as? CIFaceFeature
        where features.hasLeftEyePosition && features.hasRightEyePosition
    {
        [...]

...I use a small CGPoint extension to measure the distance between the eye positions on screen:

    [...]

    let eyeDistance = features.leftEyePosition.distanceTo(features.rightEyePosition)

    [...]
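
The extension itself isn't shown in the excerpt. A minimal sketch of it, covering both the distanceTo(_:) helper above and the toCIVector() helper used further down, might be:

    import UIKit
    import CoreImage

    extension CGPoint
    {
        // Euclidean distance to another point; used to derive a bump
        // radius that scales with the size of the detected face.
        func distanceTo(point: CGPoint) -> CGFloat
        {
            return hypot(x - point.x, y - point.y)
        }

        // Wraps the point in a CIVector, the type Core Image expects
        // for kCIInputCenterKey.
        func toCIVector() -> CIVector
        {
            return CIVector(x: x, y: y)
        }
    }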

I use that eye distance, divided by 1.25, as the radius for two Core Image Bump Distortion filters, which are centred over the eye positions. Each filter is followed by a crop back to the original extent so the output stays the same size as the camera image:

    [...]

    return cameraImage
        .imageByApplyingFilter("CIBumpDistortion",
            withInputParameters: [
                kCIInputRadiusKey: eyeDistance / 1.25,
                kCIInputScaleKey: 0.5,
                kCIInputCenterKey: features.leftEyePosition.toCIVector()])
        .imageByCroppingToRect(cameraImage.extent)
        .imageByApplyingFilter("CIBumpDistortion",
            withInputParameters: [
                kCIInputRadiusKey: eyeDistance / 1.25,
                kCIInputScaleKey: 0.5,
                kCIInputCenterKey: features.rightEyePosition.toCIVector()])
        .imageByCroppingToRect(cameraImage.extent)
    }

If the detector finds no eye positions, eyeImage(_:) simply returns the original image.
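
For reference, here's how the pieces fit together, including that fall-through; this is a sketch assembled from the excerpts above (and the assumed helpers) rather than a verbatim listing from the repository:

    func eyeImage(cameraImage: CIImage) -> CIImage
    {
        if let features = detector.featuresInImage(cameraImage).first as? CIFaceFeature
            where features.hasLeftEyePosition && features.hasRightEyePosition
        {
            let eyeDistance = features.leftEyePosition.distanceTo(features.rightEyePosition)

            return cameraImage
                .imageByApplyingFilter("CIBumpDistortion",
                    withInputParameters: [
                        kCIInputRadiusKey: eyeDistance / 1.25,
                        kCIInputScaleKey: 0.5,
                        kCIInputCenterKey: features.leftEyePosition.toCIVector()])
                .imageByCroppingToRect(cameraImage.extent)
                .imageByApplyingFilter("CIBumpDistortion",
                    withInputParameters: [
                        kCIInputRadiusKey: eyeDistance / 1.25,
                        kCIInputScaleKey: 0.5,
                        kCIInputCenterKey: features.rightEyePosition.toCIVector()])
                .imageByCroppingToRect(cameraImage.extent)
        }

        // No face, or no eye positions: pass the frame through untouched.
        return cameraImage
    }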

This project has been tested on my iPad Pro and iPhone 6s and works great on both!

If you'd like to learn more about the awesome power of Apple's Core Image, my book, Core Image for Swift, is now available through the iBooks Store and through Gumroad.

Topics:
core image filters, swift, apple ipad, apple iphone app
