Screen Capture in iOS


In a mobile application, screen capture is a powerful auxiliary tool that can serve a variety of tasks. In this article, we are going to look at the mechanisms available for implementing this feature.

With screenshots and screen recording, you can create intuitive guides that teach users how to work with complex applications: animated tutorials demonstrating exactly how to use a feature are far more effective than textual ones. Application support and bug fixing can also be substantially simplified if, along with an error report, users can send a recording of the actions that led to the failure. And many gamers use screen capture to show off their achievements or share game tricks.

There is no doubt that screen capture in mobile applications is very helpful and can simplify many tasks, so it would be naive to believe that Apple failed to include this capability in iOS. Indeed, it is there. In this post, we'll try to cover the whole range of screen capture features in iOS, as well as discuss some subtleties and nuances of such a common action as taking screenshots from a video player.

UIGetScreenImage: Allowed and Banned Again

The easiest and surest way to get the contents of the iPhone screen has always been UIGetScreenImage, which does exactly what its name suggests: it returns a reference to an image containing the contents of the phone screen at the time of the call.

This function has just one downside; if not for it, it would be the perfect solution for taking screenshots. The downside is that the function is part of the private API, which means that any application using it cannot be published on the App Store and hence will never reach a user's device.

Still, it is worth noting that at the end of 2009 Apple announced on its developer forum that it would permit the use of this function in App Store applications. Many developers enthusiastically welcomed this initiative. However, about six months later the same forum carried a note that UIGetScreenImage was back in the private API and, consequently, all applications using it should migrate to publicly available alternatives. In effect, Apple demoted the function from a sanctioned way of capturing the screen to an auxiliary tool to be used only for development and testing.

But it would have been wrong to deprive developers of this helpful feature without giving anything in return. Therefore, along with the announcement that the function was being closed, Apple offered three ways to replace it in three different situations. Let's take a closer look.

Method One. Rendering a UIView Layer

The first item on the list is a CALayer method that renders the layer's contents into a graphics context:

- (void)renderInContext:(CGContextRef)context

The use of this method for screen capture is described in Q&A QA1703 on Apple's website. The method simply draws the contents of the layer into the given graphics context; from that context you can then directly retrieve the resulting image. So for any UIView you can capture an image of its layer.

Below is an example of code that can be used to capture an image of a UIView.

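This is a minimal sketch, assuming view is the UIView to be captured; passing 0.0 as the scale makes the context use the device's screen scale:

    // Open a bitmap context the size of the view
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    // Render the view's layer (with all its sublayers) into the context
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    // Retrieve the resulting image and close the context
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();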

It is safe to say that this method covers most simple cases of screen capture: applications whose interface consists only of UIKit elements. The caveat is that as soon as the screen contains an element such as a running video player or an OpenGL scene, the capture will show a black rectangle in its place (or a rectangle of whatever color underlies that element in the view hierarchy). This happens because such elements are drawn as a hardware overlay, i.e., a layer composited by the GPU rather than the CPU. In other words, both the OpenGL scene and the player live in GPU memory and therefore do not respond to a request to render their contents into the context.

So the first proposed solution is the most broadly applicable one, but it is by no means universal: it works for an interface built with UIKit, but if the application contains GPU-rendered elements, the screenshot will have blank areas in their place.

Apple seems to have considered this point, though. The following two methods address capturing an OpenGL scene and capturing the camera picture.

Method Two. Obtaining a Picture from OpenGL ES

As we have already noted, the contents of an OpenGL scene reside in the memory of the graphics processing unit (GPU). Therefore, to get a screenshot of a view displaying a scene rendered with OpenGL ES, a different approach is needed. It is described in Q&A QA1704 and is built around the glReadPixels function.

To obtain an image of an OpenGL scene, do the following.

  1. Bind the render buffer you need (if more than one buffer is used) with glBindRenderbufferOES.
  2. Allocate enough memory for the image, bearing in mind that each pixel occupies 4 bytes.
  3. Copy the scene's pixels into the allocated memory by calling glReadPixels.
  4. Wrap the allocated memory as a data source by creating a data provider with the CGDataProviderCreateWithData function.
  5. Create a CGImage from the data provider using the CGImageCreate function.

Code illustrating this call sequence is shown below.

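This is a minimal sketch of the sequence, assuming colorRenderbuffer is the color render buffer of your OpenGL ES view, rect is its bounds in points, and scale is the screen scale; cleanup of the allocated memory and the Core Graphics objects, as well as the vertical flip discussed below, are omitted for brevity:

    // 1. Bind the render buffer that holds the scene
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);

    // 2. Allocate memory: 4 bytes (RGBA) per pixel
    GLint width  = rect.size.width * scale;
    GLint height = rect.size.height * scale;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength);

    // 3. Copy the pixels out of GPU memory
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // 4. Wrap the memory in a data provider
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);

    // 5. Create a CGImage from the provider
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4,
                                        colorSpace,
                                        kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                        provider, NULL, true, kCGRenderingIntentDefault);
    UIImage *screenshot = [UIImage imageWithCGImage:imageRef];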

With the CGImage you can easily obtain a UIImage and use it at your discretion. Bear in mind, though, that the coordinate systems of OpenGL ES and UIKit have opposite directions of the Y axis, so before using the screenshot you need to flip it vertically.

Method Three. Image Captured from the Camera

The picture captured by the iPhone camera is likewise out of the CPU's reach. To capture frames from the live camera feed, Q&A QA1702 suggests the following method based on the AVFoundation framework.

It is assumed that we already have an AVCaptureSession object for the current recording session, together with the related objects: AVCaptureDeviceInput (representing the camera) and AVCaptureVideoDataOutput (representing the output). First you need to set a delegate for the output object; its captureOutput:didOutputSampleBuffer:fromConnection: method will then be called for every new video frame. This method receives the buffer containing the frame as a CMSampleBufferRef object. From it we can obtain a CVImageBufferRef, whose memory can be used to create a graphics context holding the same information, that is, the camera picture; this is done with the CGBitmapContextCreate function. Then the only thing left is to get the picture as a UIImage from that context. An example of this approach is shown below.

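This is a minimal sketch of such a delegate method, assuming the AVCaptureVideoDataOutput has been configured to deliver frames in the kCVPixelFormatType_32BGRA pixel format:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Get the pixel buffer backing this frame and lock it for reading
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Create a bitmap context pointing directly at the buffer's memory
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
            bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        // Turn the context into a CGImage, then into a UIImage
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *frame = [UIImage imageWithCGImage:cgImage];

        // Clean up
        CGImageRelease(cgImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // ... use the frame here ...
    }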

This method of obtaining images from AVFoundation media objects is almost universal: it can also be used for taking screenshots of a video player's content; all you need is a buffer containing the next frame. Incidentally, this is the technique used in the popular GPUImage framework to apply filters to video. We will return to taking such screenshots in more detail below.

We have found no particular pitfalls in this method of screen capture. It is worth adding that Apple has also described how to take a screenshot when the screen contains UIKit elements and the camera picture at the same time, which you may need, for instance, to capture interface elements along with the camera frame. The method is described in Q&A QA1714 and is essentially a combination of the first and the third methods: first draw the image captured from the camera into a graphics context, then render the interface on top of it using renderInContext:. After that, the context holds a screenshot containing both the interface and the camera frame.
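A minimal sketch of the combined approach, assuming cameraImage is a UIImage obtained from a sample buffer as above and overlayView is the UIView containing the interface elements:

    // Open a context the size of the overlay view
    UIGraphicsBeginImageContextWithOptions(overlayView.bounds.size, YES, 0.0);
    // First draw the camera frame...
    [cameraImage drawInRect:overlayView.bounds];
    // ...then render the UIKit interface on top of it
    [overlayView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();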

What Do We Get in the End?

So, Apple has provided a replacement for UIGetScreenImage. But is it a full replacement? Not quite. Although the methods presented above, and combinations of them, cover almost all screen capture scenarios, at least one has been left out. What if we need to take a screenshot of video shown by an on-screen player? In this case the first method will produce blank space instead of the video frame, and the second and third methods are too specialized. For unknown reasons, Apple did not cover this topic, so we decided to do our own research. The following section describes how to obtain screenshots from video.

Screen Capture from Video

So, you need to capture a frame of the video playing in your application. None of the methods suggested above will work here: video content is not part of UIKit, has little in common with OpenGL ES, and is not a stream fed by the phone's camera. What should we do?

There are at least two approaches to solving this issue; which one to use depends on the type of player.

Only two types of players are used in iOS: MPMoviePlayer from the MediaPlayer framework and AVPlayer from the AVFoundation framework. As we described in our post "Playing video in iOS applications", the first player provides an easy-to-use out-of-the-box solution for playing media files; its downside is that its behavior and appearance can hardly be adjusted. The second player, on the contrary, gives you full control over all aspects of playback and interface, but is more complex to use.

Screen Capture from MPMoviePlayer

MPMoviePlayer provides a purpose-built method for taking snapshots of video; you can use it, for example, to create thumbnails representing particular videos in the application interface. The method is

- (UIImage *)thumbnailImageAtTime:(NSTimeInterval)playbackTime timeOption:(MPMovieTimeOption)option

Its description can be found in the official Apple documentation. The method lets you specify the time of the desired frame and indicate which frame has priority for you: the key frame nearest to the requested time, or the exact frame located at that time. To get a screenshot, you just call this method.
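A one-line sketch, assuming player is an MPMoviePlayerController that has already loaded its content:

    // Grab the key frame nearest to the 10-second mark
    UIImage *thumbnail = [player thumbnailImageAtTime:10.0
                                           timeOption:MPMovieTimeOptionNearestKeyFrame];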

Screen Capture from AVPlayer

With MPMoviePlayer everything has been pretty easy. Since AVPlayer is more complex, it is logical to assume that capturing its video content will also be more complicated, and so it is. There are at least two ways to take a snapshot of this player's content.

The first way is to use the AVAssetImageGenerator class, which provides a dedicated method:

- (CGImageRef)copyCGImageAtTime:(CMTime)requestedTime actualTime:(CMTime *)actualTime error:(NSError **)outError

According to the documentation, the method returns the image at the specified time. There is one subtlety, however: in reality the method returns the frame closest to the specified time, and it is precisely for this purpose that it takes a pointer as the actualTime argument, to report the actual time of the frame returned. If you are not too concerned about frame accuracy, this method is very easy to use.

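A minimal sketch, assuming player is a configured AVPlayer whose current item is the video of interest:

    AVAsset *asset = player.currentItem.asset;
    AVAssetImageGenerator *generator =
        [[AVAssetImageGenerator alloc] initWithAsset:asset];
    // Respect the video track's own transform (orientation)
    generator.appliesPreferredTrackTransform = YES;

    CMTime actualTime;
    NSError *error = nil;
    CGImageRef imageRef = [generator copyCGImageAtTime:player.currentTime
                                            actualTime:&actualTime
                                                 error:&error];
    UIImage *snapshot = nil;
    if (imageRef) {
        snapshot = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef); // the method returns a +1 reference
    }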

The second method of getting a screenshot of the player is a bit more complicated. It consists of getting a CMSampleBufferRef object from the video and then obtaining the image from it. The second part we do not have to explain again: it has already been covered in the section on taking snapshots from the camera. To get the sample buffer, you can do the following.

  1. First, create an AVAsset object containing all the information about the current media content.
  2. Then retrieve the array of the asset's tracks (for more details on the structure of AVAsset, please refer to the official documentation) and choose the track you need.
  3. For that track, create an AVAssetReaderTrackOutput object representing the track's content, and add it to a previously created AVAssetReader object that will read the content.
  4. After that, getting the buffer is trivial: just call copyNextSampleBuffer (see the class documentation) on the AVAssetReaderTrackOutput object.

Sample code to illustrate this approach is provided below.

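This is a minimal sketch of the steps, assuming asset is an AVAsset for the video being played:

    NSError *error = nil;
    // 1-2. Create a reader for the asset and pick its first video track
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    AVAssetTrack *videoTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // 3. Create a track output; ask for BGRA so the buffer can be fed
    //    to CGBitmapContextCreate as in the camera example
    NSDictionary *settings =
        @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetReaderTrackOutput *output =
        [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                         outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    // 4. Copy the next sample buffer; convert it to UIImage exactly
    //    as shown for the camera (remember to CFRelease the buffer)
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];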

As noted above, this method is fairly universal and applies to any video, whether local or server-based.

So, we have described the basic methods of taking screenshots from video. To conclude, we would like to tell you about a very unpleasant issue occurring when you capture video delivered via HLS.

Issue with HTTP Live Streaming

All of the above methods perform their function perfectly. However, if you try to use them while your application is playing video received via HTTP Live Streaming, you run into a staggering issue: you cannot take a screenshot of such video at all. The reason for this behavior is given in the AVFoundation Release Notes for iOS 4.3, in the chapter called "Enhancements for HTTP Live Streaming".

The point is that, because streaming video is dynamic, the real duration and set of tracks of the underlying media may (and will) differ from what the player reports for the video being shown. To avoid the unpleasant consequences of this, the AVAsset object was designed so that its array of tracks is empty for streamed content. This means that none of the methods of taking a snapshot will work: there is simply nothing to take a snapshot from. Moreover, the same document states that even if the track array were not empty, copyNextSampleBuffer would still return NULL for streaming video.

All this explains why we have not been able to take a screenshot of video transmitted over HLS.

***

In this post, we have tried to describe in detail the screen capture tools available in iOS.

Still, it should be noted that the now-closed UIGetScreenImage is free of all these restrictions: it can capture the screen in every case described in this post, including HLS video. The loss of this function, however, has forced developers to look for other approaches to taking screenshots. The methods presented here cover almost all scenarios; the only remaining gap is the inability to capture streaming video received via HTTP Live Streaming. For now we can only hope that Apple will eliminate this nuisance in the future.

We hope that you have learned a lot of useful information from this post and have found answers to your questions about screen capture in iOS.
