Explore ARKit 5
Build the next generation of augmented reality apps with ARKit 5. Explore how you can use Location Anchors in additional regions and more easily onboard people into your location-based AR experience. Learn more about Face Tracking and Motion Capture. And discover best practices for placing your AR content in the real world. We'll also show you how you can integrate App Clip Codes into your AR app for easy discovery and precise positioning of your virtual content.
Resources
- ARKit
- Explore the ARKit Developer Forums
- Human Interface Guidelines: App Clip Codes
- Interacting with App Clip Codes in AR
- Tracking Geographic Locations in AR
Hello. I’m David, an engineer from the ARKit team.
Today Christopher and I will be sharing a broad range of improvements to ARKit 5. We’re excited to discuss the changes coming to iOS 15.
This year, we’ve made many upgrades across the board, and we’ll be discussing multiple features. Before we do that, we want to showcase the experiences you all have been building with LiDAR.
We’ve seen a variety of LiDAR-enabled apps using the scene reconstruction and depth APIs: productivity, photo filter effects, entertainment, and even games that you can play in your living room. We’re really happy to see the creativity and resourcefulness shown by the ARKit community. While you’re creating these apps, we’re working hard on bringing you the world’s best AR framework and pushing the boundaries of what’s possible. Let’s go over the changes coming in ARKit 5. First, we’ll share some updates and best practices for location anchors, which enable AR experiences in real-world outdoor locations. Next we’ll cover App Clip Codes, which are a great way to discover app clips and also allow you to position your content in AR. We’ll highlight some improvements to face tracking using the ultra-wide front-facing camera on the new iPad Pro. And we’ll finish with some enhancements to ARKit motion capture. We’ll begin with location anchors, where we’ve worked to expand region support and provide some quality of life improvements.
We’ll also recommend some best practices for creating applications.
Location anchors were introduced last year to allow placement of AR content at a specific latitude and longitude. Their purpose is to allow creation of AR experiences tied to geographic locations.
Let’s take a look at an example. This is the New Nature experience from the ScavengAR application, built using the location anchors API. ScavengAR hosts AR content at real-world locations and enables the creation of virtual public art installations and activities. It’s a good example of how location anchors can power outdoor experiences as the world reopens. The Maps app is also introducing a new AR feature that uses the API in iOS 15. Let’s take a look. This year Maps is adding turn-by-turn walking directions shown in AR, using the location anchors API.
They incorporate several practices we recommend. We’ll cover these later on to show how you can build great applications.
Now that we’ve seen a few samples, let’s recap how location anchors can be used to create them, starting with the steps required to set up a GeoTrackingConfiguration. First, verify that the feature is supported on the device. Location anchors require an A12 chip or newer and cellular and GPS support. Next, check that the feature is available at the location before launching. Camera and location permissions must be approved by the device owner.
ARKit will prompt for permissions if needed.
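If you prefer to surface the permission state yourself before ARKit prompts, a minimal sketch along these lines can help, assuming an existing CLLocationManager; the presentPermissionPrimer helper is hypothetical:

import AVFoundation
import CoreLocation

// Geo tracking depends on both camera and location access.
let cameraAuthorized = AVCaptureDevice.authorizationStatus(for: .video) == .authorized
let locationStatus = locationManager.authorizationStatus   // assumes an existing CLLocationManager
let locationAuthorized = locationStatus == .authorizedWhenInUse || locationStatus == .authorizedAlways

if !(cameraAuthorized && locationAuthorized) {
    // Explain up front why the app needs camera and location access.
    presentPermissionPrimer()   // hypothetical helper
}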
Last year’s presentation introducing ARKit 4 and the sample project, “Tracking Geographic Locations in AR,” cover all these topics and API usage in greater depth.
We highly recommend familiarizing yourself with both of these sources.
This code sample shows how to perform the checks from the previous slide. It queries for device support and then verifies if the feature is available at the current location before attempting to run a GeoTrackingConfiguration. GeoAnchors can then be added to the ARSession like other types of anchors. They’re specified with latitude-longitude coordinates and, optionally, altitude. It’s important to monitor the GeoTrackingConfiguration’s status to see if the feature has localized and what issues may remain to be resolved.
The developer sample contains an example of how to implement a method to receive status updates.
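As a minimal sketch of what such a method might look like, assuming your class is the ARSession's delegate (the show/hide helpers here are hypothetical):

func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
    switch geoTrackingStatus.state {
    case .localized:
        // Geo tracking has localized; it is now safe to show geo-anchored content.
        hideOnboardingHints()
    case .initializing, .localizing:
        // Still working; keep onboarding guidance visible.
        showOnboardingHints()
    case .notAvailable:
        // Geo tracking is not available here; fall back to a non-geo experience.
        showUnavailableMessage()
    @unknown default:
        break
    }
}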
Checking availability near the device location is important for starting an application with geo tracking. We’re constantly working to support more regions. Location anchors were limited to five metro areas for their initial release and, since then, support has expanded to more than 25 cities across the U.S. We’re also working hard to bring location anchors to cities around the globe. For the first time, we’re excited to announce a market outside the United States.
Location anchors are coming to London.
We’ll continue working to add new regions over time.
If you don’t live in a supported metro area, you can also begin to experiment with location anchors through the use of recording and replay, which we’ll cover later on in this session.
For the list of supported regions, refer to the online documentation for ARGeoTrackingConfiguration at any time.
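If your content lives at a known coordinate rather than at your current location, you can also check availability for that specific coordinate up front. Here's a brief sketch; the coordinate is just an illustrative point in London:

let coordinate = CLLocationCoordinate2D(latitude: 51.5072, longitude: -0.1276) // example coordinate in London
ARGeoTrackingConfiguration.checkAvailability(at: coordinate) { available, error in
    if available {
        // Geo tracking is supported around this coordinate.
    } else {
        // Not supported there (or an error occurred); offer a fallback experience.
    }
}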
As location anchors become available in more regions, we recognize the need to have a common visual language to guide people. To assist with a consistent onboarding process, we’re adding a new .geoTracking goal to use with the ARCoachingOverlayView.
Similar to the existing overlay for world tracking, it displays an animation to help people achieve a good experience.
Since coaching overlays are used across many different AR apps, including Maps, people will already have some familiarity with them and know how to respond. We encourage you to include the coaching overlay to ease the learning curve for this feature.
Even while using the coaching overlays, it’s still recommended to monitor the .geoTracking status updates, which contain more detailed information on tracking state.
Here’s what the .geoTracking coaching overlay looks like. The UI shows an instruction to point the device away from the ground and then towards building facades.
After a few seconds, tracking succeeds, and your app can place geo-tracked content. The code for displaying this animation is very similar to that used for other coaching overlays. What’s unique is the introduction of the .geoTracking goal for the overlay.
Make sure to set this goal to display the correct guide. We’ve seen how the coaching overlay can create a uniform onboarding process. Now we’ll go over some other best practices that will help you create geo-tracked AR experiences. Our first recommendation is to use recording and replay for faster development.
ARKit sessions can be recorded on devices using Reality Composer, which is available on the App Store. This is especially useful for location anchors so you don’t have to go outside as often to test. It also allows collaboration with remotely located creators. The recordings can be replayed on a device using Xcode.
To avoid incompatibility issues, it’s recommended to use the same device and iOS version.
This also works for other types of ARKit applications. Replay is not specific to location anchors.
Let’s walk through the process for capturing a recording.
To record, open Reality Composer and tap for more options in the upper right. Then open the Developer pane and select Record AR Session.
Make sure location services are enabled. Tap the red button to start and stop the recording. To replay the recording, connect the device to a computer running Xcode. Click Edit Scheme and set the ARKit Replay data option for the run configuration.
Then run the application. While recording and replay can help speed up development, there are other practices we recommend for content placement. Here’s a video demonstrating these. Notice how the AR content is large and clearly visible, and information is conveyed without needing to be precisely overlaid on a structure in the environment. As a trade-off between development time and placement precision, consider creating content that floats in the air rather than trying to closely overlap real-world objects. We have a few other recommendations for placing content.
To obtain latitude and longitude coordinates to place objects, use the Apple Maps app and copy coordinates with at least six digits of precision. The steps for this were shown in the video introducing ARKit 4, so please refer there for more details. When creating an application, it’s also important to adjust the altitude of the content relative to the location anchor as needed to produce a good experience. If the app requires more precise content placement, add the geo anchor when the device is within 50 meters of its location.
If ARKit places the anchor with precise altitude, it will update the anchor’s altitude source field to indicate this. The CLLocation class has a method that can be used to compute the distances in meters between two points. This can be used to verify that someone is close to a location before adding an anchor. This concludes our section on location anchors. There are more ways to place AR content in your apps using ARKit 5. So let me hand it off to Christopher, who will tell you more. Thank you, David. Hi, my name is Christopher, and I’m an engineer on the ARKit team.
I'm excited to tell you more about the other great new features in ARKit 5. Let me start with App Clip Codes in ARKit. You probably remember that we introduced App Clips at WWDC last year. An app clip is a small slice of an app which takes people through one contextual workflow of your app without having to install the whole app. Owing to its small file size, an app clip saves download time and instantly takes people directly to a specific part of the app that’s highly relevant to their context at the moment. We also introduced App Clip Codes, which are a great way for people to visually discover and launch your app clips. No trips to the App Store necessary. This is what App Clip Codes look like.
They can come in a variety of shapes and colors. As the developer, you can create a look which works best for your scenario. You also decide what data to encode in the App Clip Code and which app clip is associated with which code. All App Clip Codes contain a visual scannable pattern and some, like the red, blue and orange codes shown here, also contain an NFC tag for the user’s convenience. People can scan the code with their camera or hold the phone to the embedded NFC tag to launch your associated app clip. And now, you can also recognize and track App Clip Codes in your AR experiences.
We’ll take a look at how that’s done later in this session. But first, let’s take a look at this app clip developed by Primer, where they use an App Clip Code to launch an AR experience. Primer partnered with Cle Tile to show people what their samples will look like in AR with the help of App Clip Codes. Simply place your iPhone and iPad over the App Clip Code to invoke an AR experience. Now people can preview the tile swatch on their wall, all without downloading an app.
That’s pretty cool, right? So, starting with iOS and iPad 14.3, you can detect and track App Clip Codes in AR experiences.
Note that App Clip Code tracking requires devices with an A12 Bionic processor or later, like the iPhone XS. Let’s take a closer look at how to use App Clip Codes in ARKit. In iOS 14.3, we introduced a new type of ARAnchor, an ARAppClipCodeAnchor. This anchor has three new properties: the URL embedded in the App Clip Code, a URL decoding state, and the radius of the App Clip Code in meters. Let me explain. Each App Clip Code contains a URL that is decoded to display the correct content. Decoding the URL is not instant. ARKit can detect the presence of an App Clip Code quickly.
But it can take a little bit longer for ARKit to decode the URL, depending on the user’s distance to the code and other factors like lighting.
This is why the App Clip Code anchor contains a URL decoding state property, which can be in one of three states.
The initial state .decoding indicates that ARKit is still decoding the URL. As soon as ARKit has successfully decoded the URL, the state will then switch to .decoded. When decoding the URL is not possible, the state will switch to .failed instead.
This can occur, for example, when someone scans an App Clip Code that is not associated with your app clip.
To use App Clip Code tracking, you should first check whether it is supported on the device. Remember that App Clip Code tracking is only supported on devices with an A12 Bionic processor or later.
Then set the appClipCodeTrackingEnabled property on your configuration to true and run the session.
To read the URL of an App Clip Code, monitor the ARSession’s didUpdate anchors delegate callback and check the decoding state of any detected App Clip Code anchors.
While ARKit is decoding the App Clip Code, you might want to display a placeholder visualization on top of the App Clip Code to give the user instant feedback that the App Clip Code was detected but still needs to be decoded. As mentioned before, decoding App Clip Codes can also fail, for example, when someone points the phone at an App Clip Code that does not belong to your app clip. We recommend that you also give feedback in that case. Once the App Clip Code has been decoded, you can finally access its URL and start displaying the right content for this App Clip Code.
For example, in case of the Primer app clip which you saw earlier, the URL contains information about which tile swatch to display. Once an App Clip Code has been decoded, the question is, where should you display the content associated with this code? One option is to display it directly on top of the App Clip Code anchor.
However, depending on your use case, the App Clip Code itself might not be the best place to display the content.
So, for example, you could position the content near the App Clip Code at a fixed relative offset.
This works well when the App Clip Code is printed on an object, say, a coffeemaker, and you want to display the virtual instructions on how to operate it on top of the machine’s buttons.
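As a rough sketch of that fixed-offset idea, assuming RealityKit and a decoded ARAppClipCodeAnchor named codeAnchor (the makeInstructionsEntity helper and the 20-centimeter offset are made up for illustration):

let anchorEntity = AnchorEntity(anchor: codeAnchor)
arView.scene.addAnchor(anchorEntity)

let instructions = makeInstructionsEntity()   // hypothetical helper that builds your content
// Offset the content 20 cm above the code in the anchor's local space.
instructions.position = SIMD3<Float>(0, 0.2, 0)
anchorEntity.addChild(instructions)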
Or you could combine the App Clip Code tracking with other tracking technologies supported by ARKit. For example, image tracking. Let’s take a look at an implementation of that. The videos and code which you see next are based on the “Interacting with App Clip Codes in AR” sample code which you can download on developer.apple.com. What you see now is a recording of the sample’s AR experience. First, I’m starting in the Camera app, scanning a sunflower seed package. Maybe I’m shopping in the gardening store, trying to decide what plant seeds to buy. iOS recognizes the App Clip Code on the package and launches the associated Seed Shop app clip. Here, I’m scanning the App Clip Code a second time, and then the grown sunflower appears on the seed package. Note that the app clip uses image tracking of the entire seed package and places the sunflower on it. This approach makes sense in this use case, as the person’s attention is most likely on the entire seed package and not on the smaller App Clip Code in the top right.
But what if someone wanted to see the plant grow in their garden? Here is what that could look like. Here we see that when the code is scanned for the first time, it invokes an app clip download.
Then when the same code is scanned again from within the app clip, it associates the code with a sunflower seed box and then tapping on the lawn makes a sunflower appear there.
If instead, the app clip saw the code on the rose seed box, it would have spawned a rose plant on the lawn.
Note that app clips are supposed to contain only one workflow.
But the app clip can offer a button to download the full Seed Shop app so people can preview other plants in their space. Remember, App Clip Code tracking also works in the app clip’s parent app. Let’s take a look at the code we need to place sunflowers on the lawn. First, you add a tapGestureRecognizer to the view to detect taps on the screen. When the person taps on the screen, you can cast a ray into the world and get back a resulting location on the horizontal plane in front of their device. In our scenario, this would be the person’s lawn.
You then grab the last App Clip Code URL that was decoded and add a new ARAnchor on the lawn.
Lastly, you download the sunflower 3D model and display it on the lawn.
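The downloadAndDisplay helper isn’t shown in the session; one hedged way to implement something like it, assuming the decoded URL resolves directly to a USDZ file, could look like this:

import Foundation
import RealityKit

func downloadAndDisplay(_ url: URL, on anchorEntity: AnchorEntity) {
    URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { return }
        // Give the download a stable location and a .usdz extension so RealityKit can load it.
        let destination = FileManager.default.temporaryDirectory.appendingPathComponent("model.usdz")
        try? FileManager.default.removeItem(at: destination)
        try? FileManager.default.moveItem(at: tempURL, to: destination)
        DispatchQueue.main.async {
            // Load the model and parent it to the anchor entity on the lawn.
            if let model = try? Entity.loadModel(contentsOf: destination) {
                anchorEntity.addChild(model)
            }
        }
    }.resume()
}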
Now, let’s talk about some best practices for App Clip Codes in ARKit. App clips can be used in different environments and for different use cases. Consider whether it’s an option for you to create NFC App Clip Codes.
We recommend NFC App Clip Codes for environments where people can physically access the code. When using an NFC App Clip Code, use appropriate call-to-action text that guides people to tap on the tag or, alternatively, offers an explicit affordance to scan the code.
Last but not least, you need to make sure that your App Clip Codes are printed at the appropriate size for the user’s environment. For example, a restaurant menu might be printed on A4 paper, and people will be comfortable scanning a 2.5-centimeter App Clip Code on that menu from a distance of up to 50 centimeters. A movie poster, however, is usually much larger and might have enough space for a 12-centimeter App Clip Code, which people would be able to scan with their phone from up to 2.5 meters away.
Please check out our Human Interface Guidelines on App Clip Codes for more information on recommended code sizes.
So that’s how you use App Clip Codes in ARKit. If you want to dive deeper into app clips and App Clip Codes, be sure to check out “What’s new in App Clips” and “Build light and fast App Clips” sessions. Now let’s jump over to face tracking.
Face tracking allows you to detect faces in the front-facing camera, overlay virtual content, and animate facial expressions in real time.
Since the launch of iPhone X, ARKit has seen a ton of great apps that take advantage of face tracking. From tracking multiple faces to running face tracking in the simultaneous front and back camera use case, this API has received a number of advancements over the years.
Last year, we introduced face tracking on devices without a TrueDepth sensor, as long as they have an A12 Bionic processor or later.
And earlier this year, we launched the new iPad Pro that provides you with an ultra-wide field-of-view front-facing camera for your AR face tracking experiences. Let’s take a look. Here you see the regular front-facing camera’s field of view. And this is the new ultra-wide field of view on the new iPad Pro. It really makes a difference, doesn’t it? Be aware that your existing apps will keep using the normal camera for face tracking. If you want to upgrade your users’ experience to the ultra-wide field of view on the new iPad Pro, you have to check which video formats are available and opt in to the new ultra-wide format. You can do this by iterating over all supported video formats and checking for the builtInUltraWideCamera option. You then set this format on your AR configuration and run the session. One thing to note is that the new iPad Pro’s ultra-wide camera has a much larger field of view than the TrueDepth sensor.
Therefore you will not get a capturedDepthData buffer on the ARFrame when using the ultra-wide video format.
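If your experience also consumes that depth data, guard for its absence. Here is a small sketch, assuming an ARSessionDelegate; the effect helpers are hypothetical:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let depthData = frame.capturedDepthData {
        // TrueDepth data is available (regular front-facing format); run depth-based effects.
        applyDepthEffects(using: depthData)   // hypothetical helper
    } else {
        // No depth buffer, for example when using the ultra-wide format; use a color-only path.
        applyColorOnlyEffects()               // hypothetical helper
    }
}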
Last but not least, let’s talk about motion capture. Since its launch in 2019, motion capture has enabled robust integration of real people in AR scenes, such as animating virtual characters along with being used in 2D and 3D simulation.
In iOS 15, motion capture is getting even better. On devices with an Apple A14 Bionic processor like the iPhone 12, motion capture now supports a wider range of body poses. And this requires no code changes at all. All motion capture apps on iOS 15 will benefit from this.
Most notably, rotations are more accurate than ever, helping you track sports actions with much more precision. Another big improvement is that your device camera can now track body joints from a much further distance.
There has also been a significant increase in the range of limb movement that can be tracked. Let’s take a look at an example. Here is one of my coworkers, Ejler, tracking his workouts with the app Driven2win. The results on iOS 15 are more precise than ever. To recap, ARKit 5 brings lots of new features and improvements.
Location anchors are available in new cities and feature a new coaching overlay.
App Clip Code tracking assists in the easy discovery and use of AR in your app clip, as well as precise positioning of your virtual content. Face tracking works with the new ultra-wide field of view on the new iPad Pro, and motion capture adds better accuracy and larger range of motion. I’m so excited to see all the amazing experiences you will create with ARKit 5.
[music]
3:29 - Geo Tracking Recap I
// Check device support for geo-tracking
guard ARGeoTrackingConfiguration.isSupported else {
    // Geo-tracking not supported on this device
    return
}

// Check current location is supported for geo-tracking
ARGeoTrackingConfiguration.checkAvailability { (available, error) in
    guard available else {
        // Geo-tracking is not available at this location
        return
    }
    // Run ARSession
    let arView = ARView()
    arView.session.run(ARGeoTrackingConfiguration())
}
3:42 - Geo Tracking Recap II
// Create Location Anchor and add to session
let coordinate = CLLocationCoordinate2D(latitude: 37.795313, longitude: -122.393792)
let geoAnchor = ARGeoAnchor(name: "Ferry Building", coordinate: coordinate)
arView.session.add(anchor: geoAnchor)

// Monitor geo-tracking status updates
func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
    …
}
6:02 - Geo Tracking Coaching Overlay
// Declare coaching view
let coachingOverlay = ARCoachingOverlayView()

// Set up coaching view (assuming ARView already exists)
coachingOverlay.session = self.arView.session
coachingOverlay.delegate = self
coachingOverlay.goal = .geoTracking

coachingOverlay.translatesAutoresizingMaskIntoConstraints = false
self.arView.addSubview(coachingOverlay)
NSLayoutConstraint.activate([
    coachingOverlay.centerXAnchor.constraint(equalTo: view.centerXAnchor),
    coachingOverlay.centerYAnchor.constraint(equalTo: view.centerYAnchor),
    coachingOverlay.widthAnchor.constraint(equalTo: view.widthAnchor),
    coachingOverlay.heightAnchor.constraint(equalTo: view.heightAnchor),
])
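The snippet above assigns self as the overlay's delegate. A minimal sketch of what that delegate might do, assuming a UIViewController subclass named ViewController (the hide/show helpers are hypothetical):

extension ViewController: ARCoachingOverlayViewDelegate {
    func coachingOverlayViewWillActivate(_ coachingOverlayView: ARCoachingOverlayView) {
        // Hide app UI while the geo-tracking coaching animation is on screen.
        hideAppUI()
    }

    func coachingOverlayViewDidDeactivate(_ coachingOverlayView: ARCoachingOverlayView) {
        // Coaching finished; bring the UI back and start placing geo-anchored content.
        showAppUI()
    }
}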
8:53 - GeoTracking Distance Method
// Method to compute distance (in meters) between points
func distance(from location: CLLocation) -> CLLocationDistance
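Building on that method, here is a brief sketch of the proximity check recommended earlier, assuming a configured CLLocationManager; the content coordinate is a placeholder:

// Only add the geo anchor once the device is within 50 meters of the content location.
let contentCoordinate = CLLocationCoordinate2D(latitude: 37.795313, longitude: -122.393792)
let contentLocation = CLLocation(latitude: contentCoordinate.latitude,
                                 longitude: contentCoordinate.longitude)

if let deviceLocation = locationManager.location,   // assumes an existing CLLocationManager
   deviceLocation.distance(from: contentLocation) <= 50 {
    let geoAnchor = ARGeoAnchor(coordinate: contentCoordinate)
    arView.session.add(anchor: geoAnchor)
}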
12:16 - App Clip Code: check device support
func viewDidLoad() {
    // Check device support for app clip code tracking
    guard ARWorldTrackingConfiguration.supportsAppClipCodeTracking else { return }
    let worldConfig = ARWorldTrackingConfiguration()
    worldConfig.appClipCodeTrackingEnabled = true
    arSession.run(worldConfig)
}
12:34 - Accessing the URL of an App Clip Code
/// Accessing the URL of an App Clip Code
override func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for anchor in anchors {
        guard let appClipCodeAnchor = anchor as? ARAppClipCodeAnchor,
              appClipCodeAnchor.isTracked else { return }
        switch appClipCodeAnchor.urlDecodingState {
        case .decoding:
            displayPlaceholderVisualizationOnTopOf(anchor: appClipCodeAnchor)
        case .failed:
            displayNoURLErrorMessageOnTopOf(anchor: appClipCodeAnchor)
        case .decoded:
            let url = appClipCodeAnchor.url!
            let anchorEntity = AnchorEntity(anchor: appClipCodeAnchor)
            arView.scene.addAnchor(anchorEntity)
            let visualization = AppClipCodeVisualization(
                url: url,
                radius: appClipCodeAnchor.radius)
            anchorEntity.addChild(visualization)
        }
    }
}
15:34 - Adding a gesture recognizer
/// Adding a gesture recognizer for user interaction
func viewDidLoad() {
    initializeARView()
    initializeCoachingOverlays()
    // Place sunflower on the ground when the user taps the screen
    let tapGestureRecognizer = UITapGestureRecognizer(
        target: self,
        action: #selector(handleTap(recognizer:)))
    arView.addGestureRecognizer(tapGestureRecognizer)
}
15:45 - Tap to place the sunflower
func handleTap(recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)

    // Attempt to find a 3D location on a horizontal
    // surface underneath the user's touch location.
    let results = arView.raycast(
        from: location,
        allowing: .estimatedPlane,
        alignment: .horizontal)
    guard let firstResult = results.first else { return }

    // Fetch the last decoded app clip code URL
    guard let appClipCodeURL = decodedURLs.last else { return }

    // Add an ARAnchor & AnchorEntity at the touch location
    let anchor = ARAnchor(transform: firstResult.worldTransform)
    arView.session.add(anchor: anchor)
    let anchorEntity = AnchorEntity(anchor: anchor)
    arView.scene.addAnchor(anchorEntity)

    // Download the 3D model associated with this app clip code.
    downloadAndDisplay(appClipCodeURL, on: anchorEntity)
}
18:33 - Checking for supported video formats for face tracking
// Check if the ultra wide video format is available.
// If so, set it on a face tracking configuration & run the session with that.
let config = ARFaceTrackingConfiguration()
for videoFormat in ARFaceTrackingConfiguration.supportedVideoFormats {
    if videoFormat.captureDeviceType == .builtInUltraWideCamera {
        config.videoFormat = videoFormat
        break
    }
}
session.run(config)