Streaming is available in most browsers,
and in the Developer app.
-
Explore rendering for spatial computing
Find out how you can take control of RealityKit rendering to improve the look and feel of your apps and games on visionOS. Discover how you can customize lighting, add grounding shadows, and control tone mapping for your content. We'll also go over best practices for two key treatments on the platform: rasterization rate maps and dynamic content scaling.
Chapters
- 0:00 - Introduction
- 1:15 - Lighting and shadows
- 5:05 - Materials
- 10:09 - Rasterization rate maps
- 13:13 - Dynamic content scaling
- 16:01 - Wrap-up
♪ Mellow instrumental hip-hop ♪
Hello! I'm Ivan, and I'm an engineer on the RealityKit team.
Welcome to my session, "Explore rendering for spatial computing." RealityKit is a framework for rendering, animating, and simulating 3D models.
One of RealityKit's strongest suits is its realistic rendering of your content.
To help you make the most of RealityKit's rendering abilities and enhance the look of your content, I want to share some rendering considerations to keep in mind while developing your app for spatial computing.
We'll start with lighting and shadows for your 3D content.
Then we'll learn what's new with RealityKit materials.
Next, I will introduce rasterization rate maps, which greatly improve system performance.
I will share recommendations on how to adjust your content to make it work well with this optimization.
Finally, I will introduce a technique called dynamic content scaling, which ensures that the UI is always sharp.
Let's start with lighting and shadows.
If you are familiar with RealityKit on iOS and macOS, you will find that most of that knowledge also applies to building spatial experiences.
We introduced image-based lighting in RealityKit to make your content look realistic.
Image-based lighting, or IBL, uses textures, like the one on the right, to produce realistic reflections.
Shadows help us understand how objects are positioned with respect to each other.
Before we look at the new features, let's quickly go over the components of image-based lighting.
There are two main components to an IBL: an environment probe texture, provided by ARKit, that is specific to the physical space in the room; and the system IBL texture, which is packaged with the OS.
The system IBL texture adds extra highlights to ensure that your content looks great in any environment.
The two components are added together to produce the combined IBL texture.
If you have an active environment, it would also have an effect on the combined IBL texture.
This year, RealityKit adds the ability to override the system IBL texture in order to customize lighting.
Let's take a look at an example.
This is the "Hello World" experience that offers a view of the solar system.
By default RealityKit would light it using the system IBL.
However, if you assign a new IBL to the new image-based light component, it would replace the system IBL and light those objects using the surrounding immersive environment.
Let me show you how that's done.
Here we first load our 3D content.
In this case, it's the satellite model.
Then we load an environment resource called Sunlight.
It contains an image of the Sun and stars surrounding the Earth.
We need both the model and the environment resource to set up IBL, so let's make sure both loading operations have finished.
Next, we add the ImageBasedLightComponent.
It references the environment resource that we've just loaded.
Finally, we add an ImageBasedLightReceiverComponent to the satellite entity.
You can also add receiver components to other entities in order to light them using the same IBL.
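For example, here's a minimal sketch continuing the satellite example (the "Moon" asset name is hypothetical):

// Hypothetical: light a second entity with the satellite's IBL.
if let moon = try? await Entity(named: "Moon", in: worldAssetsBundle) {
    content.add(moon)
    // Reference the entity that owns the ImageBasedLightComponent.
    moon.components.set(ImageBasedLightReceiverComponent(
        imageBasedLight: satellite))
}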
And that's how easy it is to customize lighting in RealityKit.
Next, let's take a look at how to add shadows to your application.
Let's consider a simple example where you place a 3D object like this vase on top of a floating plane.
Without any shadows turned on, it might be hard to understand the relative position of the vase and the plane.
But by simply adding RealityKit's grounding shadow, it becomes a lot clearer that the vase is above the center of the plane.
Let's see how to do this in code.
We start by loading the vase model.
Here, flower_tulip is the name of our 3D model in our project.
Next, we add the grounding shadow component.
Make sure the castsShadow flag is set to true.
And that's it! The vase entity will now cast grounding shadows.
Simple, isn't it? Grounding shadows appear on top of 3D models as well as on objects in the physical environment.
Using a custom IBL for lighting your scene and including grounding shadows can make your content look a lot better, but you could also directly work on the look of your objects by tweaking materials.
Most of the RealityKit materials that are available on macOS and iOS can also be used on visionOS.
Let's quickly review them.
The most commonly used material is PhysicallyBasedMaterial.
PhysicallyBasedMaterial in RealityKit reacts to lighting and can be used to represent a variety of real-world materials, such as plastics or metals.
SimpleMaterial also reacts to lighting, but uses a smaller subset of parameters.
It is especially good for quick experiments.
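For instance, here's a minimal sketch comparing the two (the sphere and color values are just placeholders):

// Physically based: explicit surface parameters.
var pbr = PhysicallyBasedMaterial()
pbr.baseColor = .init(tint: .blue)
pbr.metallic = 1.0
pbr.roughness = 0.2

// Simple: fewer parameters, handy for quick experiments.
let simple = SimpleMaterial(color: .blue, isMetallic: true)

let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                         materials: [simple])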
UnlitMaterial doesn't react to lighting.
In other words, it maintains a constant look under changing lighting conditions.
VideoMaterial is a variation of unlit material that can map a movie file onto the surface of an entity.
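For instance, a minimal sketch (the movie file name and plane size are placeholders):

import AVFoundation

// Hypothetical movie file bundled with the app.
if let url = Bundle.main.url(forResource: "intro", withExtension: "mp4") {
    let player = AVPlayer(url: url)
    // Map the video onto a plane and start playback.
    let screen = ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [VideoMaterial(avPlayer: player)])
    player.play()
}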
In addition to these materials, RealityKit introduces a new type of material called ShaderGraphMaterial.
You can author the new ShaderGraphMaterial in Reality Composer Pro or load it from a MaterialX file.
You can learn more about ShaderGraphMaterial in the session "Explore Materials in Reality Composer Pro." The color output of all of these materials goes through a special step called tone mapping.
Tone mapping is a transformation that RealityKit applies by default to the color output of a material.
It enables more natural perceived colors using a variety of techniques.
One such technique is remapping values above one into the visible range.
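RealityKit's exact curve isn't documented, but as an illustration, a classic operator like Reinhard compresses bright values into the displayable range:

// Illustrative only: the Reinhard operator, one classic tone mapping
// curve; not necessarily what RealityKit uses internally.
func reinhard(_ x: Float) -> Float {
    x / (1.0 + x)  // maps [0, ∞) into [0, 1); e.g., 3.0 becomes 0.75
}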
Let me demonstrate this with an example.
Here's a 3D render of a TV with tone mapping disabled.
I assigned a texture with very bright values to the display.
Now, if I enable tone mapping, you can see more details in the bright regions, like these flower petals.
Tone mapping generally works great and renders beautiful visuals, but for some use cases you may want to display the exact colors of an object, in which case you'll have to opt out of tone mapping.
Let's look at an example.
Here's a simple application that shows a traffic light and three buttons with labels "Stop," "Wait," and "Go." The traffic light itself is a 3D model, and the three buttons were added using SwiftUI.
In order to precisely match the color of the lamp to the color of the button, we could use an unlit material for the lamps, since unlit materials maintain a constant look independent of the lighting conditions.
However, the output of an unlit material is still affected by tone mapping, which is on by default for all RealityKit materials.
So, even if the same color is assigned to the SwiftUI button and the material of the lamp, they may appear slightly different from each other.
The screenshot you see was taken with tone mapping enabled; let me show you what it looks like when tone mapping is disabled for the lamp material.
You will notice that the colors of lamps and buttons accurately match.
Let's toggle tone mapping for the lamp material one more time.
This is with tone mapping enabled and this is with tone mapping disabled.
Let's take a look at a code sample that shows how to toggle tone mapping.
We start by loading the traffic light model.
Here, traffic_light is the name of our 3D model in our project.
Next, we find the entity named red_light.
This entity corresponds to the top lamp of the traffic light.
Once we have the entity, we access its model component.
Next, we create a new unlit material.
We pass both our desired color and a new Boolean parameter called applyPostProcessToneMap.
This Boolean parameter is set to false in order to disable tone mapping transformation for this material.
Finally, we replace the material on the model component and assign the model component back to the entity.
This is done for each of the three lamps.
Now the color of the buttons and the color of the lamps should match closely.
The applyPostProcessToneMap flag is useful in cases where you want to show an exact representation of the colors in your scene.
This can come in handy when using RealityKit to build something like a menu or a heads-up display.
This new property is also exposed in the material editor of Reality Composer Pro.
Now, let's take a look at some quality considerations.
We'll start with the rasterization rate maps for spatial computing.
The displays used in the headset have a high resolution, and the OS needs to update these displays many times a second.
Let me explain this with a visual.
As you may know already, the headset has the ability to detect exactly where a person's eyes are looking.
Here's a simulated scenario where a person moves their eyes to the right and then back to the center.
The yellow circle represents the center point of the person's focus.
The area that is surrounding that point is highlighted with a glow, and the periphery is darkened.
The rasterization rate map makes it so that fewer calculations are performed in the darkened areas.
You can see that at any given moment the highlighted region is small in comparison to the periphery.
This allows the system to achieve significant memory and performance savings.
In RealityKit, this optimization is automatically enabled for you.
While it greatly improves system performance, in some situations you may have to adjust your content to make it work well with this optimization.
For example, here's a palm leaf asset. When placed in the center of the screen, it looks sharp and detailed.
But when I move the object to the left and apply the eye movement simulation again, you can observe flickering on the palm leaf.
The flickering is especially strong when the yellow circle representing the eye direction is close to the right edge of the screen.
The flickering happens because the rasterization rate map enables higher detail around the point where the person is looking, and the pixels around the palm leaf are rendered at a lower detail as the eyes move away from it.
Now, you can reduce the flickering by simply adjusting a few parameters of your content.
Let's take a look at this.
Here's a representation of the same palm leaf asset with a red wireframe overlay on top.
You can see that there are a lot of small triangles here.
These small triangles were the reason for the flickering in the periphery.
We can reduce the flickering by simply making the triangles larger and storing the fine details in an opacity texture.
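In code, a minimal sketch of that idea (the texture name is hypothetical, and this isn't necessarily how the sample asset was authored):

// Store the leaf's fine detail in an opacity texture on a
// physically based material, instead of in many tiny triangles.
var material = PhysicallyBasedMaterial()
if let opacity = try? TextureResource.load(named: "palm_leaf_opacity") {
    material.blending = .transparent(
        opacity: .init(texture: .init(opacity)))
    material.opacityThreshold = 0.5  // alpha cutout
}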
Here's how the simulation looks after adjusting the asset.
This 3D model looks better after adjustment, because RealityKit automatically generates lower-resolution versions of the opacity map when the asset is loaded.
Those lower-resolution versions of the texture are called mipmaps and are automatically used by the GPU to improve the look in the lower-detail region.
For more details on rasterization rate maps, please refer to the article "Rendering at Different Rasterization Rates."
Similar to rasterization rate maps, there is another technique called "dynamic content scaling" that automatically improves the look of content authored using SwiftUI.
Let's take a look.
Here's an application that displays a list of months arranged in a grid.
Each month is represented with a text label.
When the eyes look at the month of June, the system rasterizes the text in that area at the highest level of detail.
The area marked in blue surrounding "June" will be rasterized at a slightly reduced level of detail, but it still maintains high quality overall.
The area marked in purple, however, is rasterized at a much lower level of detail, since the human visual system perceives less detail in the periphery and it won't be as noticeable.
This kind of rasterization at variable levels of detail, based on where the eyes are looking, is called "dynamic content scaling." The system relies on dynamic content scaling to draw UI content at the right scale, ensuring that it's always sharp.
Dynamic content scaling affects the relative size in memory for the rasterized content.
In other words, our text labels are scaled to different sizes depending on how close they are to the point where the eyes are looking.
For example, you can see that the label that says "June" is the largest -- it has the most resolution and detail.
Then there is a group of eight months -- January, February, March, and so on -- that have slightly less detail.
Finally, there is a group of three months -- April, August, December -- that are farthest away from the eye look-at direction.
That last group would be represented with smaller images in memory.
Now, let's understand how to enable dynamic content scaling.
If you are using UIKit or SwiftUI, your application will automatically benefit from this technique.
If you are relying on the Core Animation framework to build your UI, there is a new API to enable dynamic content scaling.
Let's take a look at this API.
Dynamic content scaling can be enabled by setting the CALayer property wantsDynamicContentScaling to true.
Note that this technique relies on rasterizing at higher resolutions, so it is not recommended for primarily bitmap-based content.
You can find the full list of recommendations regarding dynamic content scaling on developer.apple.com.
Let me summarize everything we've learned.
We started by looking at how to add image-based lights and grounding shadows to RealityKit applications.
Then we reviewed materials that are available for spatial experiences, including the new ShaderGraphMaterial.
And we've also learned how to control tone mapping for unlit material.
Next, we learned how rasterization rate maps are used for spatial computing, including an example of how to adjust a 3D model to reduce flickering in the periphery.
Finally, we learned how dynamic content scaling works on the system and how you can make use of it.
We're very excited about this year's release and can't wait to see the beautiful spatial experiences you build on visionOS.
Thank you.
♪
-
3:05 - Image based lighting
RealityView { content in
    async let satellite = Entity(named: "Satellite", in: worldAssetsBundle)
    async let environment = EnvironmentResource(named: "Sunlight")
    if let satellite = try? await satellite,
       let environment = try? await environment {
        content.add(satellite)
        satellite.components.set(ImageBasedLightComponent(
            source: .single(environment)))
        satellite.components.set(ImageBasedLightReceiverComponent(
            imageBasedLight: satellite))
    }
}
-
4:28 - Grounding shadows
RealityView { content in
    if let vase = try? await Entity(named: "flower_tulip") {
        content.add(vase)
        vase.components.set(GroundingShadowComponent(castsShadow: true))
    }
}
-
8:48 - Disable tone mapping
RealityView { content in
    if let trafficLight = try? await Entity(named: "traffic_light") {
        content.add(trafficLight)
        if let lamp = trafficLight.findEntity(named: "red_light") {
            if var model = lamp.components[ModelComponent.self] {
                let material = UnlitMaterial(color: .init(color),
                                             applyPostProcessToneMap: false)
                model.materials = [material]
                lamp.components[ModelComponent.self] = model
            }
        }
    }
}
-
15:34 - Dynamic content scaling
// Enable dynamic content scaling on CALayer with:
var wantsDynamicContentScaling: Bool { get set }
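For example, a minimal usage sketch (assuming a CATextLayer you manage yourself):

import QuartzCore

let label = CATextLayer()
label.string = "June"
// Opt this layer into dynamic content scaling so the system can
// re-rasterize it at the level of detail the viewer needs.
label.wantsDynamicContentScaling = true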