Meet ARKit for spatial computing
Discover how you can use ARKit's tracking and scene understanding features to develop a whole new universe of immersive apps and games. Learn how visionOS and ARKit work together to help you create apps that understand a person's surroundings — all while preserving privacy. Explore the latest updates to the ARKit API and follow along as we demonstrate how to take advantage of hand tracking and scene geometry in your apps.
Related Videos
WWDC23
- Build spatial experiences with RealityKit
- Develop your first immersive app
- Discover Metal for immersive apps
- Enhance your spatial computing app with RealityKit
- Evolve your ARKit app for spatial experiences
- Meet SwiftUI for spatial computing
- Optimize app power and performance for spatial computing
- What’s new in App Store Connect
♪ Mellow instrumental hip-hop ♪ ♪ Ryan Taylor: Hello! My name is Ryan. Conner Brooks: And I'm Conner. Ryan: In this session, we are going to introduce you to ARKit for spatial computing. We will discuss the critical role that it plays on this new platform and how you can leverage it to build the next generation of apps. ARKit uses sophisticated computer vision algorithms to construct an understanding of the world around you, as well as your movements. We first introduced this technology in iOS 11 as a way for developers to create amazing augmented reality experiences that you can use in the palm of your hand. On this platform, ARKit has matured into a full-blown system service, rebuilt from the ground up with a new real-time foundation. ARKit is deeply woven into the fabric of the entire operating system, powering everything from interacting with a window, to playing an immersive game. As part of this journey, we have also given our API a complete overhaul. The new design is a result of everything that we learned on iOS, plus the unique needs of spatial computing, and we think you are going to love it. ARKit provides a variety of powerful features that you can combine to do incredible things, such as place virtual content on a table. You can reach out and touch the content, as if it is really there, and then watch as the content interacts with the real world. It is truly a magical experience. Now that you have seen a glimpse of what can be accomplished using ARKit on this new platform, let me walk you through our agenda. We will begin with an overview of the fundamental concepts and building blocks that make up our API. Next, we will dive into world tracking, which is essential for placing virtual content relative to the real world. Then, we will explore our scene understanding features, which provide useful information about your surroundings. After that, we will introduce you to our newest feature, hand tracking, an exciting new addition that you can leverage for placing virtual content relative to your hands or building other types of bespoke interactions. And lastly, we will come full circle and take a look at a practical application of some of these features by examining code from the video that we showed you a moment ago. All right, let's get started! Our new API has been meticulously crafted in two invigorating flavors, modern Swift and classic C. All ARKit features are now provided à la carte. We wanted developers to have as much flexibility as possible, so that you can simply pick and choose what you need to build your experience. Access to ARKit data has been designed with a privacy-first approach. We have put safeguards into place to protect people's privacy, while also maintaining simplicity for developers. The API consists of three fundamental building blocks: sessions, data providers, and anchors. Let's begin with anchors and then work our way back up to sessions. An anchor represents a position and orientation in the real world. All anchors include a unique identifier, as well as a transform. Some types of anchors are also trackable. When a trackable anchor is not being tracked, you should hide any virtual content that you have anchored with it. A data provider represents an individual ARKit feature. Data providers allow you to poll for or observe data updates, such as anchor changes. Different types of data providers provide different types of data. A session represents a combined set of ARKit features that you would like to use together for a particular experience. 
You run a session by providing it with a set of data providers. Once the session is running, the data providers will begin receiving data. Updates arrive asynchronously and at different frequencies, depending on the type of data. Let's move on now and talk about privacy and how your app gets access to ARKit data. Privacy is a fundamental human right. It is also one of our core values. ARKit's architecture and API have been thoughtfully designed to protect people's privacy. In order for ARKit to construct an understanding of the world around you, the device has many cameras and other types of sensors. Data from these sensors, such as camera frames, is never sent to client space. Instead, sensor data is sent to ARKit's daemon for secure processing by our algorithms. The resulting data that is produced by these algorithms is then carefully curated before being forwarded to any clients that are requesting the data, such as your app. There are a few prerequisites to accessing ARKit data. First, your app must enter a Full Space. ARKit does not send data to apps that are in the Shared Space. Second, some types of ARKit data require permission to access. If the person does not grant permission, then we will not send that type of data to your app. To facilitate this, ARKit provides a convenient authorization API for handling permission. Using your session, you can request authorization for the types of data that you would like to access. If you do not do this, ARKit will automatically prompt the person for permission when you run the session, if necessary. Here, we are requesting access to hand tracking data. You can batch all of the authorization types that you need together in a single request. Once we have the authorization results, we iterate over them and check the status for each authorization type. If the person has granted permission, the status will be allowed. Attempting to run a session with a data provider that provides data that the person has denied access to will result in the session failing. Now, let's take a closer look at each of the features that ARKit supports on this platform, starting with world tracking. World tracking allows you to anchor virtual content in the real world. ARKit tracks the device's movement in six degrees of freedom and updates each anchor, so that they stay in the same place relative to your surroundings. The type of DataProvider that world tracking uses is called WorldTrackingProvider, and it gives you several important capabilities. It allows you to add WorldAnchors, which ARKit will then update to remain fixed relative to people's surroundings as the device moves around. WorldAnchors are an essential tool for virtual content placement. Any WorldAnchors that you add are automatically persisted across app launches and reboots. If this behavior is undesirable for the experience that you are building, you can simply remove the anchors when you are done with them, and they will no longer be persisted. It is important to note that there are some cases where persistence will not be available. You can also use a WorldTrackingProvider to get the device's pose relative to the app origin, which is necessary if you are doing your own rendering using Metal. Let's begin by taking a closer look at what a WorldAnchor is and why you would want to use one. A WorldAnchor is a TrackableAnchor with an initializer that takes a transform, which is the position and orientation that you would like to place the anchor at, relative to the app's origin. 
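World tracking only appears later in this session as C code, so here is a minimal Swift sketch of the workflow described above: adding a WorldAnchor at a transform, removing it when you no longer want it persisted, and hiding content whose anchor is not currently tracked. The WorldAnchor(transform:) initializer label, the addAnchor/removeAnchor names, and the anchorUpdates sequence on WorldTrackingProvider are assumptions modeled on the API as it is described in this session.

import ARKit
import RealityKit

// A sketch, not from the session. Assumes the provider has been run in an ARKitSession elsewhere.
@MainActor
final class WorldAnchorPlacement {
    let worldTracking = WorldTrackingProvider()

    // Your own mapping from WorldAnchor identifiers to the content you attached to them;
    // ARKit persists only the anchor identifiers and transforms.
    var entitiesByAnchorID = [UUID: Entity]()

    // Anchor a piece of content at a transform relative to the app origin.
    // WorldAnchor(transform:) and addAnchor(_:) are assumed spellings.
    func anchorContent(_ entity: Entity, at transform: simd_float4x4) async throws {
        let anchor = WorldAnchor(transform: transform)
        try await worldTracking.addAnchor(anchor)    // persisted automatically
        entitiesByAnchorID[anchor.id] = entity
    }

    // Removing an anchor also stops it from being persisted.
    func removeAnchor(_ anchor: WorldAnchor) async throws {
        try await worldTracking.removeAnchor(anchor)
        entitiesByAnchorID.removeValue(forKey: anchor.id)
    }

    // Keep content in sync with anchor updates, hiding it when the anchor is untracked.
    func processWorldAnchorUpdates() async {
        for await update in worldTracking.anchorUpdates {
            guard let entity = entitiesByAnchorID[update.anchor.id] else { continue }
            entity.transform = Transform(matrix: update.anchor.transform)
            entity.isEnabled = update.anchor.isTracked
        }
    }
}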
We have prepared an example to help visualize the difference between virtual content that is not anchored versus content that is anchored. Here we have two cubes. The blue cube on the left is not being updated by a WorldAnchor, while the red cube on the right is being updated by a WorldAnchor. Both cubes were placed relative to the app's origin when the app was launched. As the device moves around, both cubes remain where they were placed. You can press and hold the crown to recenter the app. When recentering occurs, the app's origin will be moved to your current location. Notice that the blue cube, which is not anchored, relocates to maintain its relative placement to the app's origin; while the red cube, which is anchored, remains fixed relative to the real world. Let's take a look at how WorldAnchor persistence works. As the device moves around, ARKit builds a map of your surroundings. When you add WorldAnchors, we insert them into the map and automatically persist them for you. Only WorldAnchor identifiers and transforms are persisted. No other data, such as your virtual content, is included. It is up to you to maintain a mapping of WorldAnchor identifiers to any virtual content that you associate with them. Maps are location based, so when you take your device to a new location -- for instance, from home to the office -- the map of your home will be unloaded, and then a different map will be localized for the office. Any anchors that you add at this new location will go into that map. When you leave the office at the end of the day and head home, the map that ARKit has been building at the office, along with any anchors that you placed there, will be unloaded. Once again, though, we have been automatically persisting the map along with your anchors. Upon returning home, ARKit will recognize that the location has changed, and we will begin the process of relocalizing by checking for an existing map for this location. If we find one, we will localize with it, and all of the anchors that you previously added at home will become tracked once again. Let's move on to the device pose. Along with adding and removing WorldAnchors, you can also use a WorldTrackingProvider to get the pose of the device. The pose is the position and orientation of the device relative to the app's origin. Querying the pose is required if you are doing your own rendering with Metal and CompositorServices in a fully immersive experience. This query is relatively expensive. Exercise caution when querying for the device pose for other types of app logic, such as content placement. Let's quickly walk through a simplified rendering example to demonstrate how you can provide device poses from ARKit to CompositorServices. We have a Renderer struct that will hold our session, world tracking provider, and latest pose. When initializing the Renderer, we start by creating a session. Next, we create a world tracking provider, which we will use to query for the device pose when we render each frame. Now, we can go ahead and run our session with any data providers that we need. In this case, we are only using a world tracking provider. We also create a pose to avoid allocating in the render function. Let's jump over to our render function now, which we will be calling at frame rate. Using the drawable from CompositorServices, we fetch the target render time. Next, we use the target render time to query for the pose of the device. If successful, we can extract a transform of the pose relative to the app's origin. 
This is the transform to use for rendering your content. Lastly, before we submit the frame for compositing, we set the pose on the drawable, so that the compositor knows which pose we used to render content for the frame. For more information on doing your own rendering, see the dedicated session for using Metal to create immersive apps. Additionally, there is a great session on spatial computing performance considerations that we encourage you to check out as well. Next, let's take a look at scene understanding. Scene understanding is a category of features that inform you about your surroundings in different ways. Let's begin with plane detection. Plane detection provides anchors for horizontal and vertical surfaces that ARKit detects in the real world. The type of DataProvider that plane detection uses is called PlaneDetectionProvider. As planes are detected in your surroundings, they are provided to you in the form of PlaneAnchors. PlaneAnchors can be used to facilitate content placement, such as placing a virtual object on a table. Additionally, you can use planes for physics simulations where basic, flat geometry, such as a floor or wall, is sufficient. Each PlaneAnchor includes an alignment, which is horizontal or vertical; the geometry of the plane; and a semantic classification. Planes can be classified as a variety of different types of surfaces, such as floor or table. If we are unable to identify a particular surface, the provided classification will be marked as either unknown, undetermined, or not available, depending on the circumstances. Now, let's move on to scene geometry. Scene geometry provides anchors containing a polygonal mesh that estimates the shape of the real world. The type of DataProvider that scene geometry uses is called SceneReconstructionProvider. As ARKit scans the world around you, we reconstruct your surroundings as a subdivided mesh, which is then provided to you in the form of MeshAnchors. Like PlaneAnchors, MeshAnchors can be used to facilitate content placement. You can also achieve higher-fidelity physics simulations in cases where you need virtual content to interact with objects that are not just simple, flat surfaces. Each MeshAnchor includes geometry of the mesh. This geometry contains vertices, normals, faces, and semantic classifications, which are per face. Mesh faces can be classified as a variety of different types of objects. If we are unable to identify a particular object, the provided classification will be none. Lastly, let's take a look at image tracking. Image tracking enables you to detect 2D images in the real world. The type of DataProvider that image tracking uses is called ImageTrackingProvider. You configure ImageTrackingProvider with a set of ReferenceImages that you want to detect. These ReferenceImages can be created in a few different ways. One option is to load them from an AR resource group in your project's asset catalog. Alternatively, you can also initialize a ReferenceImage yourself by providing a CVPixelBuffer or CGImage. When an image is detected, ARKit provides you with an ImageAnchor. ImageAnchors can be used to place content at known, statically placed images. For instance, you can display some information about a movie next to a movie poster. ImageAnchors are TrackableAnchors that include an estimated scale factor, which indicates how the size of the detected image compares to the physical size that you specified and the ReferenceImage that the anchor corresponds to. 
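Plane detection and image tracking do not appear in the code listings later in this session, so here is a hedged sketch of consuming PlaneAnchors with the same anchorUpdates pattern used by the other providers. The PlaneDetectionProvider(alignments:) initializer label is an assumption; image tracking would follow the same shape, with an ImageTrackingProvider configured from your ReferenceImages.

import ARKit

// A sketch, not from the session: react to detected planes and their classification.
// Create the provider with the alignments you care about, for example
// PlaneDetectionProvider(alignments: [.horizontal, .vertical]) — an assumed spelling.
func processPlaneUpdates(_ planeDetection: PlaneDetectionProvider) async {
    for await update in planeDetection.anchorUpdates {
        let planeAnchor = update.anchor

        switch planeAnchor.classification {
        case .table, .floor:
            // Flat surfaces that are good candidates for content placement or simple physics.
            print("Detected a \(planeAnchor.classification) with alignment \(planeAnchor.alignment)")
        case .wall:
            print("Detected a wall")
        default:
            // Covers the unknown, undetermined, and not-available cases mentioned above.
            break
        }
    }
}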
Now, to tell you about our new feature, hand tracking, and then walk you through the example, here is Conner. Conner: Howdy. Let's take a look at hand tracking, a brand-new addition to ARKit. Hand tracking provides you with anchors containing skeletal data for each of your hands. The type of DataProvider that hand tracking uses is called HandTrackingProvider. When your hands are detected, they are provided to you in the form of HandAnchors. A HandAnchor is a TrackableAnchor. HandAnchors include a skeleton and a chirality. The chirality tells us whether this is a left or a right hand. A HandAnchor's transform is the wrist's transform relative to the app origin. The skeleton consists of joints, which can be queried by name. A joint contains its parent joint; its name; a localTransform, which is relative to its parent joint; a rootTransform, which is relative to the root joint; and finally, each joint contains a bool, which indicates whether or not this joint is tracked. Here we enumerate all of the available joints in the hand skeleton. Let's walk through a subset of the joint hierarchy. The wrist is the root joint for the hand. For each finger, the first joint is parented to the wrist; for example, 1 is parented to 0. Subsequent finger joints are parented to the previous joint; for example, 2 is parented to 1, and so on. HandAnchors can be used to place content relative to your hands or detect custom gestures. There are two options for receiving HandAnchors -- you can either poll for updates or receive anchors asynchronously when they are available. We'll take a look at asynchronous updates in our Swift example later on, so let's add hand anchor polling to our renderer from earlier. Here's our updated struct definition. We've added a hand tracking provider, along with a left and right hand anchor. In our updated init function, we create our new hand tracking provider and add it to the list of providers that we run; we then create the left and right hand anchors that we'll need when we poll. Note that we create these ahead of time in order to avoid allocating in the render loop. With our struct updated and initialized, we can call get_latest_anchors in our render function. We pass the provider and our preallocated hand anchors. Our anchors will be populated with the latest available data. With our latest anchors populated, we can now use their data in our experience. Very cool. Now it's time to revisit the example we showed you earlier. We used a combination of ARKit and RealityKit features to build this experience. Scene geometry was used as colliders for physics and gestures, while hand tracking was used to directly interact with the cube entities. Let's take a look at how we built this example. First, we'll check out the app struct and view model. Next, we'll initialize the ARKit session. Then we'll add colliders for our fingertips and colliders from scene reconstruction. Finally, we'll look at how to add cubes with gestures. Let's just jump right into it. Here is our app, TimeForCube. We have a relatively standard SwiftUI app and scene setup. Within our scene, we declare an ImmersiveSpace. The ImmersiveSpace is required, as we'll need to move to a Full Space in order to get access to ARKit data. Within the ImmersiveSpace, we define a RealityView which will present the content from our view model. The view model is where most of our app's logic will live. Let's take a quick look. 
The view model holds onto the ARKit session; the data providers we'll be using; our content entity, which will contain all other entities that we create; and both our scene and hand collider maps. Our view model also provides various functions that we'll call from the app. We'll go through each of these in context from the app. The first function we'll call is within our RealityView's make closure to set up the contentEntity. We'll add this entity to the content of our RealityView, so that the view model can add entities to the view's content. setupContentEntity simply adds all the finger entities in our map as children of the contentEntity and then returns it. Nice! Let's move on to session initialization. Our session initialization runs in one of three tasks. Our first task calls the runSession function. This function simply runs the session with our two providers. With the session running, we can start receiving anchor updates. Let's create and update our fingertip colliders we'll use to interact with cubes. Here is our task for processing hand updates. Its function iterates over the async sequence of anchor updates on the provider. We ensure that the hand anchor is tracked, get the index fingertip joint, and check that the joint itself is also tracked. We then compute the transform of the tip of the index finger relative to the app origin. Finally, we look up which finger entity we should update and set its transform.
Let's revisit our finger entity map. We create an entity per hand via an extension to ModelEntity. This extension creates a 5 mm sphere with a collision shape. We add a kinematic physics body component and hide this entity by adding an opacity component. Though we'll hide these for our use case, it'd be nice to visualize our fingertip entities to verify that everything is working as expected. Let's temporarily set our opacity to one and make sure our entities are in the right place. Great! We can see spheres right where our fingertips are! Notice, our hands are partially covering up the spheres. This is called hand occlusion, a system feature that allows a person to see their hands on top of virtual content. This is enabled by default, but if we'd like to see our sphere a little more clearly, we can configure hand occlusion visibility by using the upperLimbVisibility setter on our scene. If we set limb visibility to hidden, we'll see the entire sphere regardless of where our hands are. For our example, we'll leave the upper limb visibility as the default value and set the opacity back to zero. Neat! Now let's add scene colliders -- we'll use these for physics and as gesture targets. Here's the task that calls the function on our model. We iterate over the async sequence of anchor updates on the provider, attempt to generate a ShapeResource from the MeshAnchor, then switch on the anchor update's event. If we're adding an anchor, we create a new entity, set its transform, add a collision and physics body component, then add an input target component so that this collider can be a target for gestures. Finally, we add the new entity to our map and as a child of our content entity. To update an entity, we retrieve it from the map, then update its transform and collision component shape. For removal, we remove the corresponding entity from its parent and the map. Now that we have hand and scene colliders, we can use gestures to add cubes. We add a SpatialTapGesture targeted to any entity, which will let us know if someone has tapped on any entity in our RealityView's content. When that tap has ended, we receive a 3D location, which we convert from global to scene coordinates. Let's visualize this location. Here's what we'd see if we added a sphere at the location of the tap. Now, we tell our view model to add a cube relative to this location. To add a cube, we first calculate a placement location that's 20 centimeters above the tap location. We then create the cube and set its position to our calculated placement location. We add an InputTargetComponent, which allows us to set which types of gestures our entity will respond to. For our use case, we'll allow only indirect input types for these cubes, as our fingertip colliders will provide direct interaction. We add a PhysicsBodyComponent with custom parameters to make the physics interactions a bit nicer. Last, we add our cube to the content entity, which means it is finally time for cube. Let's take one last look at our example, end to end. Every time we tap on the scene colliders or a cube, a new cube is added above the tap location. The physics system causes the cube to drop onto the scene colliders, and our hand colliders allow us to interact with the cubes. For more information about RealityKit, check out the introductory session on using RealityKit for spatial computing. 
And, if you already have an existing ARKit experience on iOS that you're interested in bringing over to this platform, be sure to watch the dedicated session on this topic for further guidance. Our entire team is incredibly thrilled for you to get your hands on the new version of ARKit. We cannot wait to see all of the groundbreaking apps that you will create for this exciting new platform. Ryan: Thanks for watching! ♪
-
5:20 - Authorization API
// Privacy
// Authorization

session = ARKitSession()

Task {
    let authorizationResult = await session.requestAuthorization(for: [.handTracking])

    for (authorizationType, authorizationStatus) in authorizationResult {
        print("Authorization status for \(authorizationType): \(authorizationStatus)")

        switch authorizationStatus {
        case .allowed:
            // All good!
            break
        case .denied:
            // Need to handle this.
            break
        ...
        }
    }
}
-
10:20 - World Tracking Device Pose Render Struct
// World tracking
// Device pose

#include <ARKit/ARKit.h>
#include <CompositorServices/CompositorServices.h>

struct Renderer {
    ar_session_t session;
    ar_world_tracking_provider_t world_tracking;
    ar_pose_t pose;
    ...
};

void renderer_init(struct Renderer *renderer) {
    renderer->session = ar_session_create();

    ar_world_tracking_configuration_t config = ar_world_tracking_configuration_create();
    renderer->world_tracking = ar_world_tracking_provider_create(config);

    ar_data_providers_t providers = ar_data_providers_create();
    ar_data_providers_add_data_provider(providers, renderer->world_tracking);

    ar_session_run(renderer->session, providers);

    renderer->pose = ar_pose_create();
    ...
}
-
10:21 - World Tracking Device Pose Render function
// World tracking
// Device pose

void render(struct Renderer *renderer,
            cp_layer_t layer,
            cp_frame_t frame_encoder,
            cp_drawable_t drawable) {
    const cp_frame_timing_t timing_info = cp_drawable_get_frame_timing(drawable);
    const cp_time_t presentation_time = cp_frame_timing_get_presentation_time(timing_info);
    const CFTimeInterval target_render_time = cp_time_to_cf_time_interval(presentation_time);

    simd_float4x4 pose = matrix_identity_float4x4;
    const ar_pose_status_t status = ar_world_tracking_provider_query_pose_at_timestamp(
        renderer->world_tracking, target_render_time, renderer->pose);
    if (status == ar_pose_status_success) {
        pose = ar_pose_get_origin_from_device_transform(renderer->pose);
    }

    ...

    cp_drawable_set_ar_pose(drawable, renderer->pose);

    ...
}
-
16:00 - Hand tracking joints
// Hand tracking

@available(xrOS 1.0, *)
public struct Skeleton : @unchecked Sendable, CustomStringConvertible {

    public func joint(named: SkeletonDefinition.JointName) -> Skeleton.Joint

    public struct Joint : CustomStringConvertible, @unchecked Sendable {

        public var parentJoint: Skeleton.Joint? { get }

        public var name: String { get }

        public var localTransform: simd_float4x4 { get }

        public var rootTransform: simd_float4x4 { get }

        public var isTracked: Bool { get }
    }
}
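The session notes that HandAnchors can also be used to detect custom gestures. As one hedged illustration built on the Skeleton API above, this sketch checks the distance between the index fingertip and the thumb tip to approximate a pinch; the .handThumbTip joint name and the 2 cm threshold are assumptions, and the transform composition mirrors the processHandUpdates listing later on.

import ARKit

// A sketch, not from the session: a simple pinch check built from skeletal data.
func isPinching(_ handAnchor: HandAnchor) -> Bool {
    guard handAnchor.isTracked else { return false }

    let indexTip = handAnchor.skeleton.joint(named: .handIndexFingerTip)
    let thumbTip = handAnchor.skeleton.joint(named: .handThumbTip)   // assumed joint name
    guard indexTip.isTracked, thumbTip.isTracked else { return false }

    // Compose origin_from_joint = origin_from_wrist * wrist_from_joint,
    // as in the processHandUpdates listing below.
    let originFromIndex = handAnchor.transform * indexTip.rootTransform
    let originFromThumb = handAnchor.transform * thumbTip.rootTransform

    // Distance between the two fingertips, in meters.
    let distance = simd_distance(originFromIndex.columns.3, originFromThumb.columns.3)
    return distance < 0.02   // assumed ~2 cm threshold; tune for your experience
}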
-
17:00 - Hand tracking with Render struct
// Hand tracking
// Polling for hands

struct Renderer {
    ar_hand_tracking_provider_t hand_tracking;
    struct {
        ar_hand_anchor_t left;
        ar_hand_anchor_t right;
    } hands;
    ...
};

void renderer_init(struct Renderer *renderer) {
    ...

    ar_hand_tracking_configuration_t hand_config = ar_hand_tracking_configuration_create();
    renderer->hand_tracking = ar_hand_tracking_provider_create(hand_config);

    ar_data_providers_t providers = ar_data_providers_create();
    ar_data_providers_add_data_provider(providers, renderer->world_tracking);
    ar_data_providers_add_data_provider(providers, renderer->hand_tracking);

    ar_session_run(renderer->session, providers);

    renderer->hands.left = ar_hand_anchor_create();
    renderer->hands.right = ar_hand_anchor_create();
    ...
}
-
17:25 - Hand tracking call in render function
// Hand tracking
// Polling for hands

void render(struct Renderer *renderer, ... ) {
    ...

    ar_hand_tracking_provider_get_latest_anchors(renderer->hand_tracking,
                                                 renderer->hands.left,
                                                 renderer->hands.right);

    if (ar_trackable_anchor_is_tracked(renderer->hands.left)) {
        const simd_float4x4 origin_from_wrist =
            ar_anchor_get_origin_from_anchor_transform(renderer->hands.left);
        ...
    }

    ...
}
-
18:00 - Demo app TimeForCube
// App

@main
struct TimeForCube: App {
    @StateObject var model = TimeForCubeViewModel()

    var body: some SwiftUI.Scene {
        ImmersiveSpace {
            RealityView { content in
                content.add(model.setupContentEntity())
            }
            .task {
                await model.runSession()
            }
            .task {
                await model.processHandUpdates()
            }
            .task {
                await model.processReconstructionUpdates()
            }
            .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ value in
                let location3D = value.convert(value.location3D, from: .global, to: .scene)
                model.addCube(tapLocation: location3D)
            }))
        }
    }
}
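The session also mentions configuring hand occlusion with the upperLimbVisibility setter on the scene. A minimal sketch of what that might look like on the ImmersiveSpace above, assuming the upperLimbVisibility(_:) scene modifier; the example app leaves this at its default value.

// A sketch, not from the session's project: hide the upper limbs so virtual
// content is no longer occluded by a person's hands.
var body: some SwiftUI.Scene {
    ImmersiveSpace {
        RealityView { content in
            content.add(model.setupContentEntity())
        }
    }
    .upperLimbVisibility(.hidden)
}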
-
18:50 - Demo app View Model
// View model

@MainActor class TimeForCubeViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()

    private var contentEntity = Entity()

    private var meshEntities = [UUID: ModelEntity]()

    private let fingerEntities: [HandAnchor.Chirality: ModelEntity] = [
        .left: .createFingertip(),
        .right: .createFingertip()
    ]

    func setupContentEntity() -> Entity { ... }

    func runSession() async { ... }

    func processHandUpdates() async { ... }

    func processReconstructionUpdates() async { ... }

    func addCube(tapLocation: SIMD3<Float>) { ... }
}
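The bodies of setupContentEntity and runSession are elided above. Based on the walkthrough in the session (setupContentEntity parents the fingertip entities to the content entity and returns it; runSession runs the session with the two providers), a hedged sketch of those two functions might look like this; the throwing run signature is an assumption.

// A sketch of the two elided functions, based on the description in the session.
// These belong inside TimeForCubeViewModel.
func setupContentEntity() -> Entity {
    for entity in fingerEntities.values {
        contentEntity.addChild(entity)
    }
    return contentEntity
}

func runSession() async {
    do {
        // Run the session with the two providers this experience needs.
        try await session.run([sceneReconstruction, handTracking])
    } catch {
        print("Failed to run ARKit session: \(error)")
    }
}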
-
20:00 - Hand tracking update function
class TimeForCubeViewModel: ObservableObject {
    ...
    private let fingerEntities: [HandAnchor.Chirality: ModelEntity] = [
        .left: .createFingertip(),
        .right: .createFingertip()
    ]
    ...

    func processHandUpdates() async {
        for await update in handTracking.anchorUpdates {
            let handAnchor = update.anchor

            guard handAnchor.isTracked else { continue }

            let fingertip = handAnchor.skeleton.joint(named: .handIndexFingerTip)

            guard fingertip.isTracked else { continue }

            let originFromWrist = handAnchor.transform
            let wristFromIndex = fingertip.rootTransform
            let originFromIndex = originFromWrist * wristFromIndex

            fingerEntities[handAnchor.chirality]?.setTransformMatrix(originFromIndex, relativeTo: nil)
        }
    }
}
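The createFingertip extension used by fingerEntities above isn't included in the session's listings. A minimal sketch based on the description in the session: a 5 mm sphere with a collision shape, a kinematic physics body, and an opacity component that hides it. The material choice is arbitrary, and the component spellings follow the public RealityKit API rather than the session's project.

import RealityKit

extension ModelEntity {
    // A sketch of the fingertip entity described in the session.
    class func createFingertip() -> ModelEntity {
        let entity = ModelEntity(
            mesh: .generateSphere(radius: 0.005),
            materials: [SimpleMaterial(color: .cyan, isMetallic: false)],
            collisionShape: .generateSphere(radius: 0.005),
            mass: 0.0)

        entity.components.set(PhysicsBodyComponent(mode: .kinematic))
        entity.components.set(OpacityComponent(opacity: 0.0))   // set to 1.0 to visualize the fingertips

        return entity
    }
}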
-
21:20 - Scene reconstruction update function
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor

        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }

        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.transform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent()
            entity.components.set(InputTargetComponent())

            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { fatalError("...") }
            entity.transform = Transform(matrix: meshAnchor.transform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        @unknown default:
            fatalError("Unsupported anchor event")
        }
    }
}
-
22:20 - Add cube at tap location
class TimeForCubeViewModel: ObservableObject {
    func addCube(tapLocation: SIMD3<Float>) {
        let placementLocation = tapLocation + SIMD3<Float>(0, 0.2, 0)

        let entity = ModelEntity(
            mesh: .generateBox(size: 0.1, cornerRadius: 0.0),
            materials: [SimpleMaterial(color: .systemPink, isMetallic: false)],
            collisionShape: .generateBox(size: SIMD3<Float>(repeating: 0.1)),
            mass: 1.0)

        entity.setPosition(placementLocation, relativeTo: nil)
        entity.components.set(InputTargetComponent(allowedInputTypes: .indirect))

        let material = PhysicsMaterialResource.generate(friction: 0.8, restitution: 0.0)
        entity.components.set(PhysicsBodyComponent(shapes: entity.collision!.shapes,
                                                   mass: 1.0,
                                                   material: material,
                                                   mode: .dynamic))

        contentEntity.addChild(entity)
    }
}
-