Create immersive Unity apps
Explore how you can use Unity to create engaging and immersive experiences for visionOS. We'll share how Unity integrates seamlessly with Apple frameworks, take you through the tools you can use to build natively for the platform, and show you how volume cameras can bring your existing scenes into visionOS windows, volumes, and spaces. Discover how to incorporate visionOS features like passthrough and scene understanding, customize your visuals with Shader Graph, and adapt your interactions to work with spatial input.
Chapters
- 0:07 - Intro
- 2:13 - Achieve your visual look
- 6:18 - Play to device
- 7:48 - Explore volume cameras
- 10:00 - Build interaction
- 13:49 - Prepare for the platform
- 15:04 - Wrap-Up
♪ Mellow instrumental hip-hop ♪
John Calsbeek: Welcome! I'm John, and I work on RealityKit.
Vladimir Vukićević: And I'm Vlad, from Unity.
John: I'm thrilled to introduce Unity support for immersive apps.
Unity has worked with Apple to bring the full Unity experience to this new platform.
Unity is used by tens of thousands of apps, and now you can use Unity to build immersive apps.
Triband brought their Apple Arcade title What The Golf?, which is built in Unity, to this platform.
It's really fun to play on an iPhone, and it feels great to play it this way.
There are two main approaches for creating immersive experiences on this platform with Unity.
You can create experiences which mix your content with real-world objects using passthrough, either as an immersive experience or in the Shared Space alongside other apps.
You can also bring a fully immersive Unity experience to the platform.
If you're interested in this approach, I recommend you check out "Bring your Unity VR app to a fully immersive space." Creating experiences for the shared space with Unity opens up exciting opportunities for your apps.
Here's Vlad to tell you more.
Vladimir: Thanks, John.
Unity and Apple have been working together for the past two years to make sure your Unity content looks great on the platform.
Whether you're starting with an existing project, or building something completely new, Unity is a great tool for creating immersive experiences using familiar tools and some new capabilities.
On this platform, you can achieve the visual look you want using Unity's shaders and materials.
We're introducing the ability to enter play mode directly to device, improving your iteration time.
There's also a new concept called the volume camera, which controls how content from your Unity scene is brought into the real world.
Input on this new device can be as simple as a look-and-tap gesture or involve more complex interactions.
And there are a few things you can do today to prepare your Unity content for spatial computing.
Here's an example of some of these elements working together.
This scene uses materials built with Unity's Shader Graph, and it's being displayed in the shared space in the Simulator with passthrough.
There are fully rigged and animated characters, like the ogre in the back.
Physics interactions work just like you're used to.
All of the residents of this town are using character navigation to move around, and custom dynamic scripted behaviors are used to make this scene feel alive.
We put this together in two weeks with the help of the Asset Store, and it looks great when viewed in your space, where you can get up close and look at a scene from any angle.
All content in the shared space is rendered using RealityKit.
Your Unity materials and shaders need to be translated to this new environment.
Unity has created PolySpatial, which takes care of this translation for you and brings many Unity features over to this environment.
PolySpatial translates materials, regular and skinned mesh rendering, as well as particle effects and sprites.
Unity simulation features are supported, and you continue to use MonoBehaviours, scriptable objects, and other standard tools.
Three categories of materials are translated.
They are physically based materials, custom materials, and some special effect materials.
Materials based on Unity's physically based shaders translate directly to RealityKit.
If you're using the Universal Render Pipeline, you can use any of the Lit, Simple Lit, or Complex Lit Shaders in your materials.
With the built-in pipeline, you can use the Standard Shader.
All of these are translated to a RealityKit PhysicallyBasedMaterial.
Custom shader and material types are supported through Unity Shader Graph.
Unity Shader Graphs are converted to MaterialX, a standard interchange format for complex materials.
MaterialX shaders become a ShaderGraphMaterial in RealityKit.
Many Unity Shader Graph nodes are supported, so you can create complex and interesting effects.
Handwritten shaders are not supported for rendering through RealityKit, but you can use them with RenderTextures in Unity.
You can then use that RenderTexture as a texture input to a Shader Graph for displaying through RealityKit.
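As a rough sketch of that workflow, the script below renders a handwritten shader into a RenderTexture each frame and feeds the result to a material that can be displayed through RealityKit. The shader, texture size, and the "_MainTexture" property name are assumptions for illustration, not part of the session.

```csharp
using UnityEngine;

// Sketch: run a handwritten shader on the Unity side and hand only the
// resulting texture to a Shader Graph material for display via RealityKit.
public class HandwrittenShaderToRenderTexture : MonoBehaviour
{
    public Material handwrittenMaterial;   // material using your custom HLSL shader
    public Renderer targetRenderer;        // renderer using a Shader Graph material
    RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(512, 512, 0, RenderTextureFormat.ARGB32);
        // "_MainTexture" is an assumed property name exposed by the Shader Graph.
        targetRenderer.material.SetTexture("_MainTexture", rt);
    }

    void Update()
    {
        // The handwritten shader renders into the texture every frame.
        Graphics.Blit(null, rt, handwrittenMaterial);
    }

    void OnDestroy()
    {
        if (rt != null) rt.Release();
    }
}
```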
Two additional material shader types are supported.
First is the Unlit Shader, which lets you create objects that take on a solid color or texture, unaffected by lighting.
The second is the Occlusion Shader, which lets passthrough show through the object.
You can use the Occlusion Shader with world mesh data to help your content feel more integrated with the real world.
Unity MeshRenderers and SkinnedMeshRenderers are supported and are the primary way of bringing visual content into real space.
Rigged characters and animation are available.
You can use either the Universal or the Built-in Render Pipeline, and your content will be translated to RealityKit via Unity PolySpatial.
Rendering features, such as postprocessing effects and custom pipeline stages, are not available, as RealityKit performs the final rendering.
Particle effects using Unity's Shuriken system are either translated to RealityKit's particle system, if they are compatible, or are translated to baked meshes.
Sprites become 3D meshes, though you should consider how you're using them in a spatial context.
PolySpatial works to optimize and translate rendering between Unity and RealityKit.
Simulation features in Unity work just like you're used to, such as Physics, Animation and Timeline, Pathfinding and NavMesh, your custom MonoBehaviours, and other non-rendering features.
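For example, an ordinary MonoBehaviour driving a NavMeshAgent, like the one sketched below, needs nothing platform-specific: the simulation runs in Unity and only the rendering is translated. The component name and values here are illustrative.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: standard Unity pathfinding and scripting, unchanged on this platform.
[RequireComponent(typeof(NavMeshAgent))]
public class Wanderer : MonoBehaviour
{
    public float wanderRadius = 3f;   // illustrative value, in scene units
    NavMeshAgent agent;

    void Start() => agent = GetComponent<NavMeshAgent>();

    void Update()
    {
        // Pick a new destination whenever the agent reaches the current one.
        if (!agent.pathPending && agent.remainingDistance < 0.1f)
        {
            Vector3 random = transform.position + Random.insideUnitSphere * wanderRadius;
            if (NavMesh.SamplePosition(random, out NavMeshHit hit, wanderRadius, NavMesh.AllAreas))
                agent.SetDestination(hit.position);
        }
    }
}
```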
To help you fine-tune your look and to speed up iteration, Unity PolySpatial enables "Play to device." It can take time to go through the build process to see what your content looks like on your device.
With PolySpatial, you have access to Play to device for the first time.
Play to device lets you see an instant preview of your scene and make live changes to it.
It works in the simulator and works great on device as well.
You can use Play to device to rapidly explore the placement and size of your content, including adding and removing elements.
You can change materials, textures, and even Shader Graphs to fine-tune your look while seeing your content in place with passthrough.
You can test out interaction because events are sent back to the editor.
Your simulation continues to run, so it's easy to debug just by attaching to the editor.
Here's the same castle scene you saw earlier.
I have it open in Unity on the left, and with Play to device, I can see it running in the simulator on the right.
I can add more ogres just by dragging them into my scene.
They're instantly visible in the simulator or device.
If I want to see how pink or neon green ogres look, I can.
Play to device is a really efficient workflow for iterating on your content, and it's currently available in Unity only for creating content in the shared space.
Because you're using Unity to create volumetric content that participates in the shared space, a new concept called a volume camera lets you control how your scene is brought into the real world.
A volume camera can create two types of volumes, bounded and unbounded, each with different characteristics.
Your application can switch between the two at any time.
Bounded volumes exist in the shared space as volumes, alongside other apps and games.
They have dimensions and a transform in Unity, as well as a specific real-world size.
They can be repositioned, but people cannot resize them.
The dimensions and the transform of the volume camera define the region of your scene that your app will display in a volume.
They're specified in scene units.
You can see a preview of the volume in green in Unity's scene view.
By manipulating the dimensions and the transform of the volume camera, different parts of the scene can be brought into the volume.
If I move or rotate the camera, new objects become visible in my space.
If I scale up its size, more of the scene comes into view.
In both cases, the volume remains the same size.
Only the content visible inside of it changes.
Notice that in the initial placement of the volume camera, the spring intersects the side of the volume; content is clipped by RealityKit.
If your content will intersect the edges of your volume, consider placing the same mesh in your scene a second time with a back-facing material to fill in the clipped sections.
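A minimal sketch of driving the volume camera from a script is shown below. The VolumeCamera component and its Dimensions property are assumptions based on the Unity PolySpatial package; check the package documentation for the exact API in your version.

```csharp
using UnityEngine;
using Unity.PolySpatial;   // assumed namespace for the PolySpatial VolumeCamera

// Sketch: change which region of the scene appears inside a bounded volume.
public class VolumeZoom : MonoBehaviour
{
    public VolumeCamera volumeCamera;
    public float growRate = 0.5f;   // scene units per second (illustrative)

    void Update()
    {
        // Growing the dimensions brings more of the scene into the volume;
        // the real-world size of the volume itself does not change.
        volumeCamera.Dimensions += Vector3.one * growRate * Time.deltaTime;

        // Moving or rotating the volume camera's transform changes which part
        // of the scene is visible, again without resizing the volume.
        // volumeCamera.transform.Translate(Vector3.forward * 0.1f * Time.deltaTime);
    }
}
```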
Your unbounded volume displays in a full space on this platform and allows your content to fully blend with passthrough for a more immersive experience.
It has no dimensions because it selects your entire scene, and its transform specifies how your scene units are mapped to real-world units.
There can only be one unbounded volume camera active at a time.
You'll see an example of an unbounded volume when we talk about interaction.
Unity supports multiple input types for apps on this platform.
On this platform, people use their eyes and hands to look at content and tap their fingers together to select it.
Full hand tracking as well as head pose data lets you create realistic interactions.
Augmented reality data from ARKit is available, as are Bluetooth devices such as keyboards and game controllers.
The tap gesture is the most common way of interacting with content on this platform.
In order for your objects to receive these events, they must have input colliders configured.
You can look and tap to select an object from a distance, or you can reach out and directly touch an object with a finger.
Up to two simultaneous tap actions can be in progress.
In Unity, taps are available as WorldTouch events.
They are similar to 2D tap events, but have a full 3D position.
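The sketch below shows the general shape of handling these taps through the Input System's EnhancedTouch API. The session calls them WorldTouch events; the exact C# surface in the beta may differ, so GetWorldTouchPosition is a hypothetical placeholder for whatever world-position accessor the platform provides.

```csharp
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

// Sketch only: tapped objects must have input colliders so they can be targeted.
public class TapSelector : MonoBehaviour
{
    void OnEnable() => EnhancedTouchSupport.Enable();

    void Update()
    {
        foreach (var touch in Touch.activeTouches)   // up to two simultaneous taps
        {
            if (touch.phase == UnityEngine.InputSystem.TouchPhase.Began)
            {
                Vector3 worldPosition = GetWorldTouchPosition(touch);  // hypothetical accessor
                Debug.Log($"Tap began at {worldPosition}");
            }
        }
    }

    // Placeholder standing in for the platform's 3D tap position.
    Vector3 GetWorldTouchPosition(Touch touch) => Vector3.zero;
}
```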
Hand and head pose tracking gives your application precise information about each hand joint and the viewer's head position relative to the global tracking origin.
Low-level hand data is provided via Unity's Hands package, and head pose is provided through the Input System.
Both of these are available in unbounded volumes only, and accessing hand tracking requires your application to request permission to receive the data.
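As a sketch of what reading that data can look like with Unity's XR Hands package, the script below polls the right index fingertip each frame. The joint choice and logging are illustrative; it only produces data in an unbounded volume after permission has been granted.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Sketch: query a hand joint pose from the XR Hands subsystem.
public class IndexTipTracker : MonoBehaviour
{
    XRHandSubsystem handSubsystem;

    void Update()
    {
        if (handSubsystem == null)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count > 0) handSubsystem = subsystems[0];
            return;
        }

        XRHand rightHand = handSubsystem.rightHand;
        if (rightHand.isTracked &&
            rightHand.GetJoint(XRHandJointID.IndexTip).TryGetPose(out Pose pose))
        {
            // Poses are reported relative to the tracking origin.
            Debug.Log($"Right index tip at {pose.position}");
        }
    }
}
```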
Augmented reality data such as detected planes, the world mesh, and image markers are available through ARKit and Unity's AR Foundation.
Like hands and head pose, AR data is only available in unbounded volumes and requires extra permissions.
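A short AR Foundation sketch of reacting to detected planes follows; it assumes an ARPlaneManager is already set up in your scene (for example on your XR Origin), and again requires an unbounded volume and the relevant permission.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: log planes as AR Foundation detects them.
public class PlaneLogger : MonoBehaviour
{
    public ARPlaneManager planeManager;

    void OnEnable() => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
            Debug.Log($"New plane detected at {plane.center} ({plane.alignment})");
    }
}
```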
Finally, Bluetooth devices such as keyboards, controllers, and other supported devices are available for you to access through Unity's Input System.
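Reading those devices looks the same as on other Unity platforms; the following is a minimal polling sketch with illustrative button choices.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: poll a connected game controller and keyboard via the Input System.
public class ControllerInput : MonoBehaviour
{
    void Update()
    {
        var gamepad = Gamepad.current;
        if (gamepad != null && gamepad.buttonSouth.wasPressedThisFrame)
            Debug.Log("South button pressed on a connected controller");

        var keyboard = Keyboard.current;
        if (keyboard != null && keyboard.spaceKey.wasPressedThisFrame)
            Debug.Log("Space pressed on a Bluetooth keyboard");
    }
}
```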
Because some types of input are only available in unbounded volumes, you'll need to decide what type of interaction you would like to build.
Using look and tap will allow your content to work in a bounded volume that can live alongside other applications, but if you need access to hand tracking or augmented reality data, you'll need to use an unbounded volume and request permissions.
Each of these is delivered to your Unity application via an appropriate mechanism.
This sample uses tapping, hand tracking, and plane detection in an unbounded volume scene.
You can look at a surface found via ARKit plane detection and create flowers by dragging your finger along it.
The flowers are painted using hand tracking, and you can tap to grow flowers.
Flowers that you grow react to hand movement using Unity's physics system.
By incorporating the real world into your content in this way, you can create a much deeper sense of immersion.
The best way to adapt your existing interactions depends on the type.
If you are already working with touch, such as on an iPhone, you can add appropriate input colliders and continue to use tap as your primary input mechanism.
If you are using VR controllers, you will have to redefine your interactions in terms of either tap or hand-based input, depending on how complex they are.
Existing hand-based input should work without changes.
And if you have existing UI panels using one of Unity's UI systems, you can bring them to this platform.
User interface elements built using uGUI, as well as UI Toolkit, are supported.
If you are using other UI systems, they will work as long as they use meshes and MeshRenderers or draw to a RenderTexture, which is then placed on a mesh.
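As a sketch of that RenderTexture path, the script below points a dedicated UI camera at a texture and displays the texture on a mesh. The texture size and the use of mainTexture are assumptions; in practice you would wire the texture into whatever material your mesh uses.

```csharp
using UnityEngine;

// Sketch: draw UI with a camera into a RenderTexture, then show it on a mesh.
public class UIOnMesh : MonoBehaviour
{
    public Camera uiCamera;         // camera that renders only your UI layer
    public Renderer quadRenderer;   // mesh that displays the UI inside the volume

    void Start()
    {
        var rt = new RenderTexture(1024, 1024, 0);
        uiCamera.targetTexture = rt;              // UI draws into the texture
        quadRenderer.material.mainTexture = rt;   // texture appears on the mesh
    }
}
```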
Support for spatial computing on Apple platforms will be coming soon in beta based on Unity 2022.
However, you can start getting your content ready today.
If you're starting a new project, use Unity 2022 or later.
If you have an existing project, start upgrading to 2022.
If you have handwritten shaders in your project, start converting them to Shader Graph.
Consider adopting the Universal Render Pipeline.
While the built-in graphics pipeline is supported, all future improvements will be on the Universal pipeline.
If you haven't yet, start using the Input System package.
Mixed-mode input is supported, but platform events are only delivered through the Input System.
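If you are still on the legacy input manager, the migration can start small; the sketch below defines a single Input System action with an illustrative binding path.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: a minimal Input System action, the kind of setup worth adopting now
// since platform events are only delivered through the Input System.
public class JumpAction : MonoBehaviour
{
    InputAction jump;

    void OnEnable()
    {
        jump = new InputAction("Jump", binding: "<Gamepad>/buttonSouth");  // illustrative binding
        jump.performed += _ => Debug.Log("Jump");
        jump.Enable();
    }

    void OnDisable() => jump.Disable();
}
```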
Finally, start thinking about how you can bring an existing app or game to spatial computing, or what new experience you'd like to create.
Consider whether your idea fits in the shared space to give people more flexibility, or if your app needs the power of a full space.
To get more information about Unity's support for this platform, and to sign up for early beta access, please visit unity.com/spatial.
I'm excited to see all the amazing things that you'll create with Unity and this new device.
John: Unity is an amazing way to hit the ground running and build immersive apps.
And it works great with RealityKit on this new platform.
You can get started today preparing your projects.
If you want to create a fully immersive experience with Unity, I recommend the session "Bring your Unity VR app to a fully immersive space." And don't miss "Build great games for spatial computing" to get an overview of game development technologies for this platform.
We can't wait to see what you create.
Vladimir: Thanks for watching.
♪