Core Image Tutorial in Swift

February 18, 2017

Core Image is Apple's image processing framework. It allows developers to filter images in their apps.

Advantages

  • Core Image supplies 90+ built-in filters with powerful image filtering capabilities.
  • It includes APIs for face detection and automatic image enhancement.
  • You can achieve all kinds of effects, such as modifying the vibrance, hue, or exposure of an image.
  • Filters can be chained together to apply multiple effects to an image or video frame at once, which makes custom effects easy to build.
  • It uses either the CPU or GPU to process the image data and is very fast: fast enough to do real-time processing of video frames!

Basic Image Filtering

Every time you want to apply a filter to an image, you have to do the following things:

  • Create a CIImage object. CIImage has several initialization methods, including CIImage(contentsOfURL:), CIImage(data:), CIImage(CGImage:), CIImage(bitmapData:bytesPerRow:size:format:colorSpace:), and several others. Most of the time you'll work with CIImage(contentsOfURL:) (see the short sketch after this list).
  • Create a CIContext. A CIContext can be CPU or GPU based. A CIContext is relatively expensive to initialize, so you should reuse it rather than create it over and over. You will always need one when rendering the output CIImage.
  • Create a CIFilter. When you create the filter, you configure a number of properties on it that depend on the filter you're using.
  • Get the filter output. The filter gives you an output image as a CIImage; you can convert this to a UIImage using the CIContext, as you'll see below.
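
For reference, here are a few of those initializers in use. This is a quick sketch; the url, data, and cgImage values are hypothetical placeholders you would supply yourself:

      // Each initializer builds a CIImage from a different source type.
      let fromURL = CIImage(contentsOfURL: url)       // from an NSURL
      let fromData = CIImage(data: data)              // from NSData
      let fromCGImage = CIImage(CGImage: cgImage)     // from a CGImage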
Sample Code

      
      // Get the image from the app bundle. Add a sample image named "image.png" to your project.
      let fileURL = NSBundle.mainBundle().URLForResource("image", withExtension: "png")!
       
      // Original start image
      let beginImage = CIImage(contentsOfURL: fileURL)!
       
      // Create and configure the filter
      let filter = CIFilter(name: "CISepiaTone")!
      filter.setValue(beginImage, forKey: kCIInputImageKey)
      filter.setValue(0.5, forKey: kCIInputIntensityKey)
       
      // Create the CIContext (expensive to initialize, so reuse it where possible)
      let context = CIContext(options: nil)
       
      // Render the filter output into a CGImage (extent is a property, not a method)
      let outputImage = filter.outputImage!
      let cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent)
       
      // Wrap the result in a UIImage and display it
      let newImage = UIImage(CGImage: cgimg)
      self.imageView.image = newImage
      
      

The CISepiaTone filter takes only two values: kCIInputImageKey (a CIImage) and kCIInputIntensityKey (a float value between 0 and 1). After filtering, you have to convert the resulting CIImage back to a UIImage before you can display it. Reusing the CIContext will improve performance.
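
One common pattern is to create the context once and keep it around, for example as a stored property. Here is a minimal sketch, assuming a hypothetical view controller named FilterViewController:

      class FilterViewController: UIViewController {
          // Created once and reused for every render.
          // Creating a new CIContext for each filter pass is wasteful.
          let context = CIContext(options: nil)
      }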

Increasing Performance

One of the simplest and most effective ways to increase performance is to use the device's GPU. Every iPhone comes with both a CPU and a GPU, and the GPU is much better at handling complicated graphics tasks, such as image processing.
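
For example, a GPU-based context can be created from an EAGLContext. This is a minimal sketch in the same Swift 2 syntax as the rest of this tutorial; the chained-filter example below uses the same approach:

      // Create an OpenGL ES 2 context and build a GPU-backed CIContext from it.
      let openGLContext = EAGLContext(API: .OpenGLES2)
      let gpuContext = CIContext(EAGLContext: openGLContext!)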

Using Multiple Filters

More often than not, you'll want to chain multiple filters together. Let's look at some sample code:

      override func viewDidLoad() {
          super.viewDidLoad()
          
          guard let image = imageView?.image, cgimg = image.CGImage else {
              print("No Image!")
              return
          }
          
          // GPU-based context for faster rendering
          let openGLContext = EAGLContext(API: .OpenGLES2)
          let context = CIContext(EAGLContext: openGLContext!)
          
          let coreImage = CIImage(CGImage: cgimg)
          
          // First filter: sepia tone
          let sepiaFilter = CIFilter(name: "CISepiaTone")
          sepiaFilter?.setValue(coreImage, forKey: kCIInputImageKey)
          sepiaFilter?.setValue(1, forKey: kCIInputIntensityKey)
          
          if let sepiaOutput = sepiaFilter?.valueForKey(kCIOutputImageKey) as? CIImage {
              // Second filter: exposure adjustment, fed with the sepia output
              let exposureFilter = CIFilter(name: "CIExposureAdjust")
              exposureFilter?.setValue(sepiaOutput, forKey: kCIInputImageKey)
              exposureFilter?.setValue(1, forKey: kCIInputEVKey)
              
              if let exposureOutput = exposureFilter?.valueForKey(kCIOutputImageKey) as? CIImage {
                  // Render the chained result and display it
                  let output = context.createCGImage(exposureOutput, fromRect: exposureOutput.extent)
                  let result = UIImage(CGImage: output)
                  imageView?.image = result
              }
          }
      }
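
Note that reading a filter's output with valueForKey(kCIOutputImageKey) is equivalent to reading its outputImage property; either one hands you the CIImage to feed into the next filter in the chain.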
      

That's all about Core Image.
Thanks.
