Get started with building apps for spatial computing
Get ready to develop apps and games for visionOS! Discover the fundamental building blocks that make up spatial computing — windows, volumes, and spaces — and find out how you can use these elements to build engaging and immersive experiences.
♪ Mellow instrumental hip-hop ♪

Jim Tilander: Hi, I'm Jim, an engineer on the RealityKit team. Today, my colleague Christopher from the ARKit team will join me in guiding you through how to get started with building apps for spatial computing. Let's dive in!

We are excited about our new platform for spatial computing. This platform is built on familiar foundations for people to use and for you to develop apps on. It opens up new and exciting possibilities to blend real and virtual content and to use natural input to interact with your app, and the whole system has been designed to protect people's privacy, giving you the peace of mind to focus on your app's experience.

Let's talk a bit about the fundamentals to build up our vocabulary and concepts of spatial computing. After that, we will go over the different ways to get started with your app. Then, my colleague Christopher will walk us through how to build your app, diving deeper into the details of spatial computing.

Now, let's take a look at some of the fundamentals. First, let's cover the UI concepts of spatial computing, both familiar and new. By default, apps launch into the Shared Space. This is where apps exist side by side, much like multiple apps on a Mac desktop. People remain connected to their surroundings through passthrough.

Each app can have one or more windows. These are SwiftUI scenes that can be resized and reflowed like you would expect of a normal macOS window. They can contain traditional views and controls, as well as 3D content, allowing you to mix and match 2D and 3D. People can reposition a window to their liking in their current space, just as one might expect.

Volumes allow an app to display 3D content in defined bounds, sharing the space with other apps. Volumes are great for showcasing 3D content, for example, a chess board. People can reposition volumes in space, and they can be viewed from different angles. Volumes are SwiftUI scenes, allowing you to do layout in familiar ways, and they use the power of RealityKit to display your 3D content.

Sometimes you might want more control over the level of immersion in your app, maybe to focus while watching a video or to play a game. You can do this by opening a dedicated Full Space, where your app's windows, volumes, and 3D objects are the only ones appearing across the view. In a Full Space, you can also take advantage of ARKit's APIs. For example, in addition to system-provided gestures, you can get more detailed Skeletal Hand Tracking to really incorporate the structure of people's hands into your experience.

Your app can use a Full Space in different ways. You can use passthrough to ground content in the real world and keep people connected with their surroundings. And when you play Spatial Audio and render 3D content through RealityKit, you automatically take advantage of the device's continually updated understanding of the room, blending visuals and sound into people's surroundings and making virtual objects feel like they really belong there. You can also choose to render to a fully immersive space that fills up the entire field of view. This gives your app the flexibility to deliver on your creative intent by customizing the lighting of virtual objects and choosing the audio characteristics.

These are the foundational elements of spatial computing: windows, volumes, and spaces. They give you a flexible toolset to build apps that can span the continuum of immersion.
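To make these elements concrete, here is a minimal sketch of how the three scene types might be declared in a visionOS app. The scene identifiers, view contents, and the "Earth" asset name are assumptions for illustration, not code from the session.

```swift
import SwiftUI
import RealityKit

@main
struct GlobeApp: App {
    var body: some Scene {
        // A window: a regular SwiftUI scene that runs in the Shared Space.
        WindowGroup(id: "main") {
            Text("Hello, spatial computing")
                .padding()
        }

        // A volume: a window style made for bounded 3D content.
        WindowGroup(id: "globe") {
            Model3D(named: "Earth")   // assumes an "Earth" model in the app bundle
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // A space: a Full Space the app can open for a more immersive experience.
        ImmersiveSpace(id: "solar-system") {
            RealityView { content in
                // Build or load the immersive scene's entities here.
            }
        }
    }
}
```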
Christopher will talk more about this later. Now that we've introduced the foundational elements of spatial computing, let's explore the ways we can interact with windows, volumes, and spaces.

On this platform, we can interact with apps by simply using our eyes and hands. People can, for example, interact with a button by looking at it and tapping their fingers together to select. People can also reach out and physically touch the same button in 3D space. For both these kinds of interactions, a variety of gestures are possible, like taps, long presses, drags, rotations, zooms, and a lot more. The system detects these automatically and generates touch events for your app to respond to.

Gestures are integrated well with SwiftUI. The same gesture API works seamlessly with RealityKit entities. This allows people to easily interact directly with your 3D scene elements. For example, this could be useful for placing a flag directly onto a 3D model, controlling a virtual zipper, or picking up virtual chess pieces. Now, if you want to build a game of bowling or transform people's hands into a virtual club, you can do this through ARKit's Skeletal Hand Tracking. Here we can see an example of how you can stack cubes on a table using taps and then smash them onto the table with your hands. This is a powerful way to bring app-specific hand input into the experience. And finally, the system automatically brings input from wireless keyboards, trackpads, and accessibility hardware right into your app, and the Game Controller framework lets you add support for wireless game controllers as well.

Collaborating and exploring things together is a fundamental part of spatial computing. We do this through SharePlay and the Group Activities framework. On this platform, as on macOS, people can share any window, like this Quick Look experience. When people share a Quick Look 3D model, we sync the orientation, scale, and animations between participants, making it easy to collaborate while being in different locations. When people are collaborating on something that is shown in their space and that they physically point at, it is important that everyone in the SharePlay session have the same experience. This enables natural references such as gesturing to an object and reinforces the feeling of being physically together. We've added the concept of shared context to the system. The system manages this shared context for you, helping make sure that participants in a SharePlay session can all experience content in the same way. You can use Spatial Persona Templates to further customize how people experience your content. To learn more, watch our sessions about designing and building spatial SharePlay experiences for this platform.

Given that the device has a lot of intimate knowledge of the surroundings and people, we put a lot of architecture in place to protect people's privacy. Let's dive into that. Privacy is a core principle guiding the design of this platform, while making it easy for you as a developer to leverage APIs that take advantage of the many capabilities of the device. Instead of allowing apps to access data from the sensors directly, the system does that for you and provides apps with events and visual cues. For example, the system knows the eye position and gestures of somebody's hands in 3D space and delivers that as touch events.
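As a rough illustration of the entity-targeted gesture API described earlier, here is a minimal sketch; the view, the generated chess piece, and its setup are hypothetical, not code from the session. The app receives only the resolved gesture value; raw eye and hand data stay with the system.

```swift
import SwiftUI
import RealityKit

struct ChessboardView: View {
    var body: some View {
        RealityView { content in
            // A generated box stands in for a chess piece. To receive targeted
            // gestures, an entity needs input-target and collision components.
            let piece = ModelEntity(
                mesh: .generateBox(size: 0.05),
                materials: [SimpleMaterial(color: .white, isMetallic: false)]
            )
            piece.components.set(InputTargetComponent())
            piece.components.set(CollisionComponent(shapes: [.generateBox(size: [0.05, 0.05, 0.05])]))
            content.add(piece)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the RealityKit entity that was tapped.
                    print("Tapped \(value.entity.name)")
                }
        )
    }
}
```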
Also, the system will render a hover effect on a view when it is the focus of attention, but it does not communicate to the app where the person is looking. For many situations, the system-provided behaviors are sufficient for your app to respond to interactions. In cases where you actually do need access to more sensitive information, the system will ask the person for permission first. An example would be asking for permission to access scene understanding to detect walls and furniture, or access to Skeletal Hand Tracking to bring custom interactions into your app.

Now that we've seen some of the capabilities available for apps, let's move on to exploring how we develop those apps. Everything starts with Xcode, Apple's integrated development environment. Xcode offers a complete set of tools for developing apps, including project management support, visual editors for your UI, debugging tools, the Simulator, and much more. And most importantly, Xcode also comes with the platform SDK, which provides the complete set of frameworks and APIs you'll use for developing your app.

If your source file contains a SwiftUI preview provider, the preview canvas will automatically open up in Xcode. The preview canvas has been extended to support 3D, allowing you to visualize RealityKit code for your scene, including animations and custom code. This enables shorter iteration times as you find the right look and feel for your app, editing live code and seeing the results of changes and tweaks directly. Let's experiment a little bit here with how the satellite looks orbiting the Earth by changing the orbital speed and the size of the satellite. Notice the preview reflects the code changes, making it easy to see the results of quick experimentation in the code. Xcode Previews also has an object mode that allows for quick previews of 3D layouts, for example, seeing if your layout fits inside the bounds of the view. This is great for building tightly integrated scenes with both traditional UI and new 3D visuals. Xcode Previews gives you a fantastic way to get the layout right before you run your app.

The Simulator is a great way of testing interactivity with your app. You can move and look around in the scene using a keyboard, mouse, or compatible game controller. And it's easy to interact with your app by using simulated system gestures. The Simulator comes with three different simulated scenes, each with day and night lighting. This makes it easy to see your app under different conditions. The Simulator is a great way to run and debug most apps and to quickly iterate during development with a very predictable environment.

We've also extended Xcode to support a number of runtime visualizations while you are debugging, to help you quickly understand and track down bugs by simply looking at the scene. Here we have plane estimation visible, including the semantic meaning of those planes and the collision shapes in the scene. It's easy to toggle the visualizations you would like to focus on from the debugger in Xcode. These visualizations work great both in the Simulator and on the device.

When it becomes time to polish your application's performance and responsiveness, we've got familiar tools like Instruments. Instruments is a powerful performance analysis tool included with Xcode. You can use Instruments to get actionable insights into your running app.
And for spatial computing, Instruments 15 includes a new template, RealityKit Trace, providing even deeper insights into your app's behavior on the platform. The RealityKit Trace template has new instruments allowing developers to understand the GPU, CPU, and system power impact of their app and identify performance hotspots. You can easily observe and understand frame bottlenecks and trace them back to vital metrics like total triangles submitted or the number of RealityKit entities simulated. This lets you quickly find and address potential performance issues. For more details, check out the session "Meet RealityKit Trace."

We've also introduced a new developer tool called Reality Composer Pro. It allows you to preview and prepare 3D content for your apps. Reality Composer Pro helps you get an overview of all your assets and how they fit together in your scene. A new feature that we added to RealityKit is particles, and you can use a workflow in Reality Composer Pro to author and preview them. Adding particles to your scene provides movement, life, and endless possibilities. Clouds, rain, and sparks are just a few effects that you can build in a short amount of time. Adding audio to your scenes and associating it with objects is a breeze. You can also spatially preview audio, which takes into account the shape and context of your entire scene.

Most virtual objects will use RealityKit's physically based material to represent a variety of real-world materials. RealityKit uses sensor data to feed real-world lighting information into these materials, grounding them in people's surroundings. RealityKit also has a couple of additional standard materials available for your app to use in common scenarios. For those times when you have a very specific need, perhaps to convey a creative intent, you can author custom materials in Reality Composer Pro with the open standard MaterialX. You can do this through an easy-to-use node graph, without touching any code, and quickly preview them directly in the viewport. You can learn more about this in the session "Explore materials in Reality Composer Pro." When you're feeling good about your 3D content, you can send your scenes to your device and test your content directly. This is great for iteration times since you don't even have to build an app. To learn more, watch the session "Meet Reality Composer Pro."

Another option that is available is Unity. Unity is bringing the ability for you to write apps for spatial computing with familiar workflows and without any plugins required. You can bring your existing content over to power new immersive experiences. To learn more, watch these sessions covering how to write immersive apps with Unity.

Now that we understand some of the fundamental concepts and tools available to us, let's see how we can start building apps. There are two ways to get started: either you design a brand-new app from the ground up to be spatial, or you bring an existing app to this new spatial platform. Let's explore how we build a new app. Designing an application from the ground up to be spatial helps you quickly embrace the new, unique capabilities of spatial computing. To get started, you can use the new app template for this platform. The app template has two new important options. First, you can choose your Initial Scene Type to be either a ‘Window’ or a ‘Volume’. This generates the initial starting code for you, and it's easy to add additional scenes later.
The second option lets you add an entry point for an immersive space to your app. By default, your app will launch into the Shared Space. If you set the Immersive Scene Type to ‘Space’, a second scene will be added to your app, along with an example button showing how to launch into this Full Space. And when you finish the assistant, you are presented with an initial working app in SwiftUI that shows familiar buttons mixed in with a 3D object rendered with RealityKit. To learn more, watch the session "Develop your first immersive app."

We are also publishing code samples, each one illustrating different topics to quickly get you up to speed. Destination Video shows how to build a shared, immersive playback experience that incorporates 3D video and Spatial Audio. Happy Beam shows how you can leverage an Immersive Space, including custom hand gestures, to create a fun game to play with friends. And Hello World shows how to transition between different visual modes with a 3D globe. Christopher will talk more in detail about Hello World later.

Building and designing your app from the ground up on this platform offers opportunities to easily embrace spatial computing concepts. However, some of you might have existing apps that you want to bring to spatial computing. From the start, iPad and iPhone apps look and feel great. If your app supports iPad, that variant will be preferred over iPhone, though iPhone-only apps are fully supported. Let's take a look at the recipes app shown here in the Simulator. While this platform has its own darker style, iPad and iPhone apps retain a light mode style. Windows can scale for ease of use, and rotations for apps are handled, allowing you to see different layouts. To find out more, watch the session "Run your iPad and iPhone apps in the Shared Space" to learn about the system's built-in behaviors, functional differences, and how to test with the Simulator. However, running an existing iPad or iPhone app is just the beginning. It's easy to add a destination in your Xcode project for this platform with just a click. And after that, we can simply select our target device, recompile, and run.
Once you recompile, you get native spacing, sizing, and relayout. Your windows and materials will all automatically move to the platform's look and feel, ensuring legibility in any light condition, and your app can take advantage of built-in capabilities like highlighting for your custom controls.

Now here's Christopher to show us how we can evolve our apps further using the concepts we've covered so far.

Thanks, Jim. I'm going to walk you through how to build an application that incorporates the elements you've learned about previously. Let's start with Hello World to explore some of the great functionalities you can integrate into your app. Here's our sample in action. Upon running the app in the Simulator, Hello World launches with a window into the Shared Space, right in front of us. This is a familiar-looking window made in SwiftUI, and it contains different elements such as text, images, and buttons. Tap gestures allow navigation within the app. Observe how our new view has embedded 3D content. SwiftUI and 3D content now work together seamlessly. Going back to our main window and selecting Planet Earth brings us to a new view. A new element appears. This is a volume. It contains a 3D model of the Earth, alongside a few UI elements. By moving the window bar, the volume's position can be adjusted anywhere in the surroundings.
Going back to our main window again and selecting View Outer Space brings up an invitation for us to enter the solar system.
From here, we can enter space, which is shown here with an immersion style of ‘full’. Our example renders Planet Earth and dims passthrough, allowing us to focus on the content without the distractions of the surroundings. Now that we have seen how this looks in action, let's break down some of the functionalities of Hello World and show you how to use these concepts in your own apps.

As you've learned from Jim, there are multiple elements: windows, volumes, and spaces. You can look at this as a spectrum that your app can use to flex up and down, depending on what is best for people using your app in a specific moment. You can choose to present one or several windows in the Shared Space, allowing people to be more present. They can see passthrough and can choose to have other apps side by side. Or you can choose to increase the immersion level by having your app take over the space entirely. Finding the most suitable elements for your app's experience in a given moment and flexing between them is an important consideration when you design your app for spatial computing.

Next, let's look further into how to use windows as part of your experience. Windows serve as a starting point for your app. They are built with SwiftUI using scenes, and they contain traditional views and controls. Windows on this platform support mixing 2D and 3D content. This means that your 3D content can be presented alongside 2D UI in a window. Windows can be resized and repositioned in space. People can arrange them to their liking.

Let's go back to our example. In Hello World, the content view holds our SwiftUI images, text, and buttons, along with a call to action to get more immersive content. Creating a window is as easy as adding a WindowGroup to a scene. Inside the WindowGroup, we will display our content view. Our content view can add 3D content, bringing a new dimension of depth to your app. To do that, you can use the new Model3D view. Model3D is similar to an image view, making it easy to load and display beautiful 3D content in your app that is rendered by RealityKit. To add a Model3D to your view, we initialize it by passing the name of the satellite model. With this, Model3D will find and load the model and place it into your view hierarchy. Now this window has the satellite embedded into the view, and it can be seen extending out along the z-axis, adding a new dimension of depth to your app.

Now that we have added a satellite, we can add interactions. Interactions are fundamentally built into the system and provided by SwiftUI. SwiftUI provides the gesture recognizers you are already familiar with on Apple platforms, such as Tap, onHover, and RotateGesture. The platform provides new gesture recognizers that are made for 3D interactions, like rotations in 3D space, taps on 3D objects, and more. Let's look at the code that enables interactions with the satellite. We are going to enable a spatial tap gesture so we can grab and move the satellite around. Starting from Model3D, we can now add a gesture. Inside, we add a DragGesture targeted to the satellite entity. We can then use the values passed into the gesture's update closure to move the satellite. Let's see what that looks like. Back in our satellite view, where our satellite is rendered, note that the DragGesture allows me to tap and drag the model, which moves with my interactions. As we've just seen, it's easy to mix 2D and 3D content together with Model3D. These are just a few things you can do with a window.
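A condensed sketch of that flow might look like the following. The "Satellite" asset name and the view are assumptions, and the real Hello World sample is organized differently; note that for a targeted gesture to hit-test the model, the underlying entity also needs collision and input-target components.

```swift
import SwiftUI
import RealityKit

struct SatelliteView: View {
    var body: some View {
        // Model3D loads and displays a RealityKit-rendered model,
        // much like AsyncImage does for images.
        Model3D(named: "Satellite") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the targeted entity so it follows the drag,
                    // converting the gesture location into the entity's
                    // parent coordinate space.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
        )
    }
}
```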
Now let's look at another type of element, the volume. Let's see what a volume has to offer. A volume is an extension of a window, giving you similar functionality. It's a new style of window that is ideal for 3D content. Volumes can host multiple SwiftUI views containing your 2D or 3D content. Although volumes can be used in a Full Space, they are really built for the Shared Space; therefore, content must remain within the bounds of the volume.

Let's look at how to add a volume to your scene. You will start by creating a new WindowGroup and setting its windowStyle to volumetric. Then, you need to give it a defaultSize with the properties width, height, and depth. The units of a volume can be specified in points or meters. Let's look at this running in the Simulator. When the application is presented, the volume is placed in front of the person. This volume has the dimensions we specified, along with the platform controls: the application title bar, which displays our app name, making it easy to identify which app this volume belongs to; the window bar, enabling the volume to be positioned; and the close button, which suspends the app and closes the volume when tapped.

Currently, our volume renders the 3D model of the Earth, but you might want to start adding more content and different behaviors. In order to do this, you can adopt RealityView as part of your app. RealityView is a new view that can be added to your scene, allowing any number of entities to be managed directly within SwiftUI. SwiftUI and RealityView integrate easily, letting you connect SwiftUI's managed state to entity properties. This makes it easy to drive the behavior of 3D models with a source of truth from your app's data model. Conversion between coordinate spaces is easy with conversion functions provided by RealityView, and RealityView offers a way to position SwiftUI elements inside your 3D scene through attachments.

Let's take a moment to look at how we can use attachments inside RealityView. The RealityView initializer that we're going to use takes three parameters: a make closure, an update closure, and an attachments ViewBuilder. The make closure allows you to create entities and attach them to the root entity. The update closure is called whenever the state of the view changes. And lastly, the attachments closure is where we add our SwiftUI views, with a tag that allows RealityView to translate our views into entities. Now, let's work through an example of how to use attachments with RealityView. Adding an attachment is as easy as putting your SwiftUI view inside the attachments closure of RealityView. Let's use this icon of a delicious pastry to represent a location on our 3D globe. For each attachment, you must add a tag that gives the attachment a name. I'll name this one ‘pin’. To display the attachment, I'll add it to the content of my RealityView. I'll do that in the update closure by adding it to the root entity of the scene. Here, we can see the attachment we made previously, rendering on the globe above my favorite bakery location.

As we've just seen, using RealityKit unleashes powerful features such as Model3D, RealityView, attachments, and many more. These can be easily integrated into your app. This is only scratching the surface of what RealityKit can do. If you want to know more, I encourage you to go and watch "Build spatial experiences with RealityKit" and "Enhance your spatial computing app with RealityKit."
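As a minimal sketch of the make/update/attachments flow described above, it might look something like this; the view name, the generated stand-in globe, the pin placement, and the Attachment(id:) form used to tag the view are assumptions for illustration rather than the session's exact code.

```swift
import SwiftUI
import RealityKit

struct GlobeView: View {
    var body: some View {
        RealityView { content, _ in
            // make: create entities and add them to the content.
            // A generated sphere stands in for the Earth model here.
            let globe = ModelEntity(
                mesh: .generateSphere(radius: 0.2),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            content.add(globe)
        } update: { content, attachments in
            // update: called when SwiftUI state changes; place the tagged
            // attachment entity into the scene.
            if let pin = attachments.entity(for: "pin") {
                pin.position = [0, 0.25, 0]   // roughly above the stand-in globe
                content.add(pin)
            }
        } attachments: {
            // attachments: SwiftUI views that RealityView turns into entities.
            Attachment(id: "pin") {
                Label("My favorite bakery", systemImage: "mappin")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```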
Let's recap what we went through so far. A volume is a container that is ideal for 2D and 3D content. Volumes are built for the Shared Space, can coexist with windows, and are bounded to specified dimensions.

Next, let's dive into our last type of element, spaces. Once your app opens a dedicated Full Space, the system hides all other apps, leaving only your app visible. Now you can place your app's windows, volumes, and content anywhere around you. Thanks to ARKit and RealityKit, your virtual content can even interact with your surroundings. You could throw a virtual ball into the room and watch as it bounces off the wall and then rolls on the floor. And with the addition of hand tracking, you can build custom gestures and interactions or place content relative to people's hands. Many of these capabilities come from ARKit. To go into more depth and learn how you can leverage them in your app, be sure to check out the "Meet ARKit for spatial computing" session.

With spaces, your app can also offer different levels of immersion, depending on which style is chosen at creation time. Jim talked a bit about the spectrum of immersion available in a Full Space. Let's dive in and learn more about how you can add more immersion into your app. Immersion style is a parameter that can be passed to your Full Space. There are two basic styles, called .mixed and .full. Mixed style layers your app's content on top of passthrough. Full style hides passthrough and displays your content only. You can also combine the two by choosing progressive. This style allows some passthrough initially, but the person can change the level of immersion all the way up to full by turning the Digital Crown located on the top of the device.

Let's go back to our example to explore immersion style. I'll start with the mixed style and see how that looks. And because a Full Space is a SwiftUI scene, I can use RealityView to display the Earth. Here's the Earth viewed from high orbit… and here's how I displayed the scene in my app. Notice I didn't actually specify the immersion style. That's because when you create an immersive space, SwiftUI assumes the mixed style by default. Let's also take the app completely immersive by adding a different immersion style. This time, I'll use the immersion style ‘full’. Adding an immersion style to the end of our ImmersiveSpace is easy. We store the immersion style in a state variable and then set it to full. Because we want to give people the choice of when they enter an immersive experience, it's a good idea to add a button that lets the person decide if they want to enter this immersive style. Now let's see the new immersion style in action. Back in our app, I've taken Hello World from a single window to fully immersed, allowing us to view Planet Earth from any angle. And that's just the beginning of what you can do with your spatial app.

Let's see where you can go from here. In this session, we've covered the fundamentals, how to get started, and the basics of building an app. We have some great sessions that should be your next stop, about the principles of spatial design, building apps with SwiftUI and RealityKit, and creating your 3D content. With spatial computing, your app creation can venture into new, exciting avenues guided by your ingenuity. Thanks for watching! ♪