Capture high-quality photos using video formats
Your app can take full advantage of the powerful camera systems on iPhone by using the AVCapture APIs. Learn how to choose the most appropriate photo or video formats for your use cases while balancing the trade-offs between photo quality and delivery speed. Discover some powerful new algorithms which can help you deliver greatly improved photo quality when you use video formats in your app. To learn more about improvements in AVCapture, be sure to also see the "What's new in camera capture" video.
♪ ♪ Hi, my name is Roy. I'm an engineer on the Camera Software team. Today I will be walking you through some exciting photo quality improvements we made with our most popular video formats, and how your applications can make use of them to deliver an even better experience. iPhone is the most popular camera in the world, and for many years, developers have been taking advantage of its powerful camera systems to provide a diverse set of world-class experiences, from professional photography apps to video streaming tools. Different scenarios call for different levels of photo quality. For example, apps dedicated to taking still photos will demand the absolute best quality that the cameras can provide. A social app, on the other hand, might need to apply face effect overlays on top of the video frames being streamed. And this custom rendering might be computationally expensive. In order to avoid frame drops, the developer might prefer lower resolution frames so there are fewer pixels to process per frame. This diversity in use cases calls for an easy way to specify where you want to land on the scale of quality versus performance. Before we dive into photo quality, however, let's have a brief refresher on how photos are taken on iOS in general. We will start with an AVCaptureSession object, around which we can build our object graph. Since we are taking photos, we will use a camera as our AVCaptureDevice. Then, an AVCaptureDeviceInput is instantiated based on that device, and it will provide input data to the session.
An AVCapturePhotoOutput will then be added to the graph as the recipient of the photos.
And all these elements are then connected together using an AVCaptureConnection.
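Here is a minimal sketch of that object graph in Swift, assuming the back wide-angle camera; the helper name and the bail-out error handling are illustrative, not prescriptive.

```swift
import AVFoundation

// A minimal sketch of the capture graph described above.
func setUpCaptureSession() -> (AVCaptureSession, AVCapturePhotoOutput)? {
    let session = AVCaptureSession()
    session.beginConfiguration()

    // The camera is our AVCaptureDevice; an input wraps it to feed the session.
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else {
        return nil
    }
    session.addInput(input)

    // The photo output receives the captured photos. The connection between
    // input and output is formed automatically by the session.
    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else { return nil }
    session.addOutput(photoOutput)

    session.commitConfiguration()
    return (session, photoOutput)
}
```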
After the session starts running, we can capture photos by calling the capturePhoto method on the AVCapturePhotoOutput instance. Further customization can be done using the AVCapturePhotoSettings object passed to the capturePhoto method. The captured photo will be represented as an AVCapturePhoto object that you will receive in your delegate method (a minimal sketch follows at the end of this paragraph). We had a very detailed discussion on these APIs in the 2016 session, Advances in iOS Photography. Please check it out if you haven't already. Now that we know how to take photos on iOS in general, let's see how high-quality photos can be taken. Historically, if you wanted to capture photos of the best possible quality, you would set the isAutoStillImageStabilizationEnabled property on your AVCapturePhotoSettings to true, and that's because still image stabilization was the main method for getting higher quality photos. But over the years, we have been continuously evolving our photo quality-enhancing algorithms. In addition to still image stabilization, we now have a far richer set of techniques to draw from, such as a variety of multi-image fusion technologies, including Smart HDR and Deep Fusion. Consequently, the name isAutoStillImageStabilizationEnabled has become quite obsolete as a proxy for high photo quality.
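Here is that minimal sketch of the capture call and delegate; the `PhotoCaptureHandler` type and the `takePhoto` helper are hypothetical names used only for illustration.

```swift
import AVFoundation

// A hypothetical delegate type; it just checks for an encoded image and would
// normally hand the data off to the rest of the app.
final class PhotoCaptureHandler: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        print("Received photo: \(data.count) bytes")
    }
}

// Keep the delegate alive until the capture completes.
let handler = PhotoCaptureHandler()

// Capture a photo once the session is running; AVCapturePhotoSettings is where
// further per-capture customization (flash, format, prioritization) goes.
func takePhoto(with photoOutput: AVCapturePhotoOutput) {
    let settings = AVCapturePhotoSettings()
    photoOutput.capturePhoto(with: settings, delegate: handler)
}
```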
To solve this problem, in iOS 13, we introduced AVCapturePhotoOutput.QualityPrioritization. It's a very easy way to tell AVCapturePhotoOutput how to prioritize quality in your photo captures. We haven't had a chance to talk about this important API at previous WWDCs, so let's now take a moment to see how it works.
There are three quality prioritization levels to choose from: speed, balanced, and quality. With speed, you are telling the framework that the speed of the capture is what you care about the most, even at the expense of photo quality. If a balance needs to be struck between photo quality and delivery speed, balanced should be used. Quality does the opposite of speed. It says image quality should be prioritized first and foremost, while the potential slowness of the capture process can be tolerated. Please note that the quality prioritization specified only serves as a hint to the AVCapturePhotoOutput, and it doesn't dictate what algorithms to use.
Ultimately, the AVCapturePhotoOutput will consider a variety of constraints and choose the most appropriate algorithm for the current scene. For instance, it might choose a different method in a low-light situation than in a well-lit space.
That being said, we understand that based on different capture durations, you might want to plan your user experience differently.
So on the AVCaptureResolvedPhotoSettings object passed to some of the AVCapturePhotoOutputDelegate methods, we give you a property called photoProcessingTimeRange that indicates how long it will take to deliver the photo to your delegate.
This, for instance, can help you decide whether you want to put out a spinner if the capture will take a while. Let's see how it works in code. When you are setting up your AVCapturePhotoOutput, you can specify a max quality prioritization that the particular capture session will require. If you choose not to do so, the default value is balanced. This only has to be set once. The importance of doing so is that depending on different settings, we will configure our capture pipelines differently. For instance, if we know you will not go beyond speed prioritization, we can construct a capture pipeline that consumes much less memory and power than the one for, say, balanced prioritization. So we encourage you to choose responsibly and only take what you need.
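As a rough sketch, the one-time setup and the processing-time check might look like this; `showSpinner()` is a hypothetical UI hook, the one-second threshold is an arbitrary example, and the extension builds on the hypothetical handler sketched earlier.

```swift
import AVFoundation

// One-time setup: declare the highest prioritization this session will ever
// need, so the capture pipeline can be sized accordingly (default is .balanced).
func configure(_ photoOutput: AVCapturePhotoOutput) {
    photoOutput.maxPhotoQualityPrioritization = .quality
}

// Hypothetical UI hook standing in for whatever progress indicator your app shows.
func showSpinner() { /* present a spinner in your UI */ }

// In the capture delegate, photoProcessingTimeRange on the resolved settings
// indicates how long the photo will take to arrive; use it to decide whether
// a spinner is worth showing.
extension PhotoCaptureHandler {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        let expected = resolvedSettings.photoProcessingTimeRange
        // End of the expected range = earliest start + duration, in seconds.
        if CMTimeAdd(expected.start, expected.duration).seconds > 1.0 {
            showSpinner()
        }
    }
}
```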
Before you call the capturePhoto method, you can customize the quality prioritization for this particular capture by setting the photoQualityPrioritization property on your AVCapturePhotoSettings object. The default value is balanced. As demonstrated here, we are using two different levels in two different situations.
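A minimal sketch of such a per-capture choice; the Boolean flag and what it stands for are illustrative assumptions.

```swift
import AVFoundation

// Choose a per-capture prioritization depending on the situation.
func capture(from photoOutput: AVCapturePhotoOutput,
             delegate: AVCapturePhotoCaptureDelegate,
             wantsBestQuality: Bool) {
    let settings = AVCapturePhotoSettings()
    // Default is .balanced; the per-capture value must not exceed the output's
    // maxPhotoQualityPrioritization, or an exception is thrown.
    settings.photoQualityPrioritization = wantsBestQuality ? .quality : .speed
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```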
Please note that the per-capture quality prioritization cannot go beyond your AVCapturePhotoOutput's max quality prioritization, otherwise you will get an exception. The performance characteristics of the three levels are determined by the underlying algorithms we use. The mappings differ based on the kind of format you use, and we will talk some more about the difference between photo and video formats momentarily. With photo formats, Speed will get you WYSIWYG photos-- that is, What You See Is What You Get photos-- which are lightly processed, with only some noise reduction applied. If Balanced is specified, we will choose from a collection of fast fusion algorithms that produce much better photo quality than WYSIWYG photos at a somewhat slower capture rate. For Quality, depending on the current device and lux level, the framework will use some heavy machinery such as Deep Fusion in order to provide the best possible photo quality. The photos will look great, but there is no free lunch. You pay for it with more processing time.
For video formats, on the other hand, all levels will use the lightest processing to deliver photos as fast as possible. We have been talking about photo and video formats for a while now. Let's take a closer look at the differences between them. By using photo formats, you are signaling to the framework that taking still photos is what you care about the most. For example, if you are using an AVCaptureVideoDataOutput with a photo format, the sample buffers you get by default will only be of preview resolution, and that's because knowing that taking photos is your top priority, we can assume that these frames will be used for preview rather than video recordings. A good reason to choose photo formats is that some photo-centric features are exclusive to photo formats, such as Live Photos and ProRAW. If that's something you want to do, then photo format is the way to go. Photo formats come with the highest resolutions available, but the frame rates are limited to 30 frames per second.
To choose photo formats, you can set your session preset to photo, or you can pick a format where isHighestPhotoQualitySupported is true. Using video formats, on the other hand, indicates that the experience will now center around video. You will get resolutions more suitable for recording and streaming, and you will be able to use high frame rates such as 60 fps. If a format is not a photo format, then it's considered a video format. So you can select one by using a non-photo session preset or choosing a format where isHighestPhotoQualitySupported is false (a short sketch of both paths follows this paragraph). You might be wondering why we are not applying some of the powerful algorithms to video formats. It's not because we're lazy. We have good reasons for that. Many apps choose to use video formats because they need to do heavy custom processing, and video formats are well-suited for this purpose because of their low overhead. If we leveraged some of the aforementioned photo-enhancing techniques, we might introduce degradations in these apps' experiences. For instance, an AR app might allow users to snap a photo of the 3D scene that they are interacting with. Running the existing fusion algorithms at this point is likely to introduce frame drops in the app's camera feed, interrupting its core feature. So we have been very conscious of this delicate balance between quality and speed, and we designed our video formats to work responsively even under the most demanding conditions. But those compromises stop today with iOS 15. We are taking a major leap in photo quality with our most popular video formats. With some improved algorithms, we are now able to radically improve photo quality without impacting other aspects of your apps' experiences. With this new feature, your apps can now take amazing photos while retaining the same flexibility to perform sophisticated custom computation.
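As referenced above, here is a sketch of the two format paths; the first-match selection of a non-photo format is an illustrative policy, not a recommendation.

```swift
import AVFoundation

// Configure for a photo-centric or a video-centric experience.
func chooseFormat(session: AVCaptureSession,
                  device: AVCaptureDevice,
                  photoCentric: Bool) throws {
    if photoCentric {
        // Photo format: highest resolutions, up to 30 fps, and photo-only
        // features such as Live Photos and ProRAW become available.
        session.sessionPreset = .photo
    } else {
        // Video format: resolutions and frame rates suited to recording and
        // streaming; any format that is not a photo format qualifies.
        if let videoFormat = device.formats.first(where: { !$0.isHighestPhotoQualitySupported }) {
            try device.lockForConfiguration()
            device.activeFormat = videoFormat
            device.unlockForConfiguration()
        }
    }
}
```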
So how big of a quality leap are we talking about here? Let's take a look at some before-and-after comparisons.
The improvement is quite substantial. The little boy's face on the right has much less noise, and thus looks much more natural. And we can better perceive the light coming off his hair.
The catch lights in your subjects' eyes are simply more vibrant and lively.
In this outdoor low-light situation, there is superior de-noising on her face and clothes. Lastly, the environment also looks better. The leather texture on the chair is much better preserved.
Now that you're enticed, let's take a look at the algorithm mappings for the supported video formats.
Speed will still get you the lightly processed WYSIWYG photos. They are still the fastest way of getting a photo, and since speed is now your top priority, this fits the bill perfectly, so we didn't change it. You will not be getting any frame drops in your video recordings or any disruptions in your preview feed. With Balanced, however, you are now getting a significant bump in quality, while only getting a very slight increase in the photo's processing time.
And just like in Speed, your video recording will not have any frame drops. Your preview feed will not get interrupted, even when those great-looking photos are taken and processed. Finally, for Quality, we are running more expensive algorithms to get even better quality. This might drop frames or cause preview feed interruptions, depending on how recent your device is. This feature is available on all iPhone models going back to iPhone XS. The video formats that are getting this upgrade are the most popular ones: 1280x720, with support for both 30 and 60 frames per second.
1920x1080, also for both 30 and 60 fps.
1920x1440, for 30 fps. And we even added support for 4K, with 30 fps.
So how can you make sure you are using the right formats in your code? It's very simple. In iOS 15, we are introducing a new property called isHighPhotoQualitySupported on the AVCaptureDevice.Format type. For formats that support this feature, this property will be true. Any format with this property being true is guaranteed to be a video format, so you don't have to worry about accidentally picking a photo format.
Let's say you want to get any such format. You just need to get the formats available on your AVCaptureDevice instance. Then just select the one with isHighPhotoQualitySupported being true (see the sketch following this paragraph). We updated our sample code AVCam to use this feature. Please check it out if you want to see a working example. It's possible to confuse the new property isHighPhotoQualitySupported with the existing isHighestPhotoQualitySupported. As we mentioned earlier, the latter tells you whether a format is a photo format; it doesn't tell you whether a video format supports high photo quality. Now, do you need to put in any work at all to get this new feature? The answer is maybe. If you are already using AVCapturePhotoOutput and the .balanced prioritization, then congratulations, you will automatically get better-looking photos on iOS 15.
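As referenced above, a minimal sketch of that format selection; the first-match policy is just an example, and the availability check reflects that isHighPhotoQualitySupported is new in iOS 15.

```swift
import AVFoundation

// Pick any video format on the device that supports high photo quality.
func selectHighPhotoQualityVideoFormat(on device: AVCaptureDevice) throws {
    guard #available(iOS 15.0, *) else { return }
    // Not to be confused with isHighestPhotoQualitySupported, which identifies
    // photo formats rather than high-photo-quality video formats.
    if let format = device.formats.first(where: { $0.isHighPhotoQualitySupported }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }
}
```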
If your app is using speed prioritization, by simply updating it to balanced, you will receive better photos without having to worry about any frame drops.
If you're still using the deprecated AVCaptureStillImageOutput, then hopefully this will give you a big incentive to switch over.
Since using the quality prioritization with a video format can now introduce frame drops in your videos, we don't want to impose that new behavior on your apps without you opting in. So we put in a link-time check: if your app is using quality prioritization with a video format and was compiled prior to iOS 15, we will automatically change it to balanced. If you would indeed like to get the best quality, all you need to do is recompile your app with the iOS 15 SDK. There are a few caveats to be aware of.
This feature currently only works with AVCaptureSession, and not with AVCaptureMultiCamSession.
The deprecated AVCaptureStillImageOutput will not support this feature. If you are using .balanced or .quality prioritization, some of the algorithms we use might fuse several differently exposed images to improve dynamic range. Photos will have great quality, but they might look different from video being recorded at the same time. If you need your video and photo to look exactly the same, please use .speed instead. Lastly, let's summarize what we just covered.
When designing your app's experience, be mindful of the trade-off between quality and speed. Once you've figured out the role photo quality will play in your use cases, use the appropriate prioritization levels to accomplish that. And with a minimal amount of work, and sometimes no work at all, you will now be getting amazing photos with video formats. Thank you very much. [percussive music]