What’s new in Core Motion
Learn how you can use the latest Core Motion updates to expand how your app uses motion data. Discover how to stream higher-frequency sensor data when recording a HealthKit workout on Apple Watch. We'll show you how you can get submersion data — including water depth and temperature — during water-based activities like snorkeling. Find out how to stream motion data like attitude, user acceleration, and rotation rate from audio devices like AirPods to connected devices like iPhone and Mac.
♪ ♪ Erin: Hi! My name is Erin, and I'm an engineer on the Core Motion team. I'm excited to tell you about some cool updates to Core Motion. Core Motion serves as a central framework to access motion data from inertial sensors. As our hardware has advanced, so has our ability to capture motion information. Crash detection, fall detection, and spatial audio are just some of the features that rely on improved sensing capabilities. With Core Motion, you can take advantage of these improvements in your own apps too. In this session, I'll focus on some of the newer ways you can interact with motion data, but before I get to what's new, I'd like to give you a quick reminder of the sensors that generate motion data.
Capturing the way a device moves is central to how we experience it. Many of Apple's devices use built-in sensors to help create a notion of their movement through space. Take Apple Watch for example. Its built-in sensors include an accelerometer, which measures acceleration; a gyroscope, which measures rotation; a magnetometer, which measures the magnetic field; and a barometer, which measures pressure. Together, they help track how the device moves and orients in space.
Generating an idea of a device's movement is fundamental to many of the features we enjoy. These include things like tracking the number of steps you've taken that day and how many calories you've burned during a workout. Motion data also supports experiences that rely on the orientation of the device, like a stargazing app that lets you explore the stars in our sky. Features that keep us safe by detecting when you've been in a car crash or when you've fallen also rely on tracking movement using those same sensors. These are just some of the many applications that are possible, and we always look forward to seeing how you leverage Core Motion.
Now that I've given you a brief overview of some of the sensors that are involved, I'll go over getting motion data from audio products like AirPods, getting water submersion data updates, and finally, a new way to stream higher rate sensor data. Let's get started with headphone motion. Not too long ago, spatial audio with dynamic head tracking changed the way we experience music and movies. Dynamic head tracking relies on the same device motion algorithms that live on iPhone and Apple Watch. When CMHeadphoneMotionManager was introduced a couple years ago, the same data that made dynamic head tracking possible was made available to you. By streaming attitude, user acceleration, and rotation rate data to a connected iOS or iPadOS device, you could track the way your head moves. Head tracking unlocked a lot of cool features, from gaming to fitness applications. And now, this year, CMHeadphoneMotionManager is coming to macOS. Let's go through some details.
CMHeadphoneMotionManager was first made available in iOS and iPadOS 14. And starting this year, it's also coming to macOS 14. You can use CMHeadphoneMotionManager to stream device motion from audio products that support spatial audio with dynamic head tracking, like AirPods Pro, to a connected iOS, iPadOS, or macOS device.
Inspect CMDeviceMotion for attitude, user acceleration, and rotation rate data during streaming from supported devices, just like you would on iPhone and Apple Watch.
Note the additional information specific to CMHeadphoneMotionManager, like SensorLocation, which helps disambiguate the source of the data: whether it's from the left or right bud.
Because data is streamed from a remote device, it's important to understand when it's connected.
CMHeadphoneMotionManagerDelegate makes it easy to listen for connection state updates.
Let me show you how to use it.
Adopt the CMHeadphoneMotionManagerDelegate protocol to respond to connection state updates. Data is available when the audio device is connected to a supported streaming device, like iPhone, iPad, or Mac. If Automatic Ear Detection is enabled, you'll receive events that impact head tracking too. You'll get a disconnect event when the buds are taken out of ear, and a connect event when they're put back in.
Similarly, if Automatic Head Detection is enabled, putting on and taking off over-ear headphones will trigger these events. Setting up CMHeadphoneMotionManager to listen for these events and stream data is easy. Let me show you how.
Before you start streaming, you'll want to make sure that device motion data is available, which can be checked using the isDeviceMotionAvailable property. Assign a delegate to receive the connection events I spoke about earlier. Then, start streaming data. CMHeadphoneMotionManager exposes both a push and a pull interface to grab data. In this example, we'll use the push interface. Use startDeviceMotionUpdates and specify an operation queue and handler.
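Here's a minimal sketch of that setup in Swift. The class name and the print statement are placeholders for illustration; the availability check, delegate assignment, and push-based start call follow the flow I just described.

```swift
import CoreMotion

class HeadphoneMotionStreamer: NSObject, CMHeadphoneMotionManagerDelegate {
    let motionManager = CMHeadphoneMotionManager()

    func start() {
        // Confirm headphone device motion is available before streaming.
        guard motionManager.isDeviceMotionAvailable else { return }

        // Receive connect and disconnect events through the delegate.
        motionManager.delegate = self

        // Push interface: specify an operation queue and a handler.
        motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
            guard let motion else { return }
            // Attitude, user acceleration, and rotation rate arrive with each update.
            print(motion.attitude, motion.userAcceleration, motion.rotationRate)
        }
    }

    // Fires when a supported audio device connects, or a bud goes back in ear
    // with Automatic Ear Detection enabled.
    func headphoneMotionManagerDidConnect(_ manager: CMHeadphoneMotionManager) { }

    // Fires when the device disconnects, or the buds are taken out of ear.
    func headphoneMotionManagerDidDisconnect(_ manager: CMHeadphoneMotionManager) { }
}
```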
Because you're accessing motion data, authorization is important. Users are prompted to authorize your app for motion data using the Motion Usage Description key you add to your Info.plist. You can check the authorizationStatus property to confirm whether you've been authorized for motion data, and provide a seamless experience regardless of permission level.
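As a quick sketch of that check, assuming the class-level authorizationStatus API on CMHeadphoneMotionManager:

```swift
import CoreMotion

// Sketch of gating features on motion authorization. Remember to add the
// Motion Usage Description (NSMotionUsageDescription) key to your Info.plist.
switch CMHeadphoneMotionManager.authorizationStatus() {
case .authorized:
    // Safe to start streaming headphone motion.
    break
case .notDetermined:
    // The user will be prompted once updates are requested.
    break
case .denied, .restricted:
    // Degrade gracefully, for example by hiding head-tracking features.
    break
@unknown default:
    break
}
```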
Once you're authorized and data starts streaming, tracking head pose is easy using the attitude information provided with each device motion update. For example, we can keep track of a reference attitude as startingPose and use the multiply method to conveniently obtain the current sample's relative change to that original pose. Along with the attitude, user acceleration, and rotation rate data, each device motion update contains sensor location information. This is important because motion data is delivered to you from one bud at a time. The SensorLocation enum delivered with each sample lets you identify which bud sourced the data. The bud streaming the data can be impacted by a number of things, including in-ear state if Automatic Ear Detection is enabled. For example, if data was streaming from my right bud but I take it out of my ear with Automatic Ear Detection enabled, then my left bud will take over the data stream. This allows for a more seamless head tracking experience. Convenient head tracking opened the door to a lot of different experiences. Things like counting the number of pushups you did or monitoring posture were made easier than ever. And now, with macOS support, you're able to stream motion data from head-tracking-enabled audio products to an even wider range of devices. We're excited to see what you build using CMHeadphoneMotionManager. Now, you don't need to measure pressure to track how your head moves, but something else does. I'm going to talk about ways you can interact with water-based activities using some cool updates to CMWaterSubmersionManager.
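Here's a small sketch of that relative-pose idea; HeadPoseTracker and the stored startingPose are hypothetical names used only for illustration.

```swift
import CoreMotion

final class HeadPoseTracker {
    // Reference attitude captured from the first sample (hypothetical name).
    private var startingPose: CMAttitude?

    func handle(_ motion: CMDeviceMotion) {
        guard let startingPose else {
            // Keep the first attitude as the reference pose.
            self.startingPose = motion.attitude
            return
        }

        // Express the current attitude relative to the reference pose.
        let relative = motion.attitude
        relative.multiply(byInverseOf: startingPose)
        print("Relative yaw:", relative.yaw)

        // Each sample also identifies which bud sourced the data.
        switch motion.sensorLocation {
        case .headphoneLeft:
            print("Streaming from the left bud")
        case .headphoneRight:
            print("Streaming from the right bud")
        default:
            break
        }
    }
}
```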
During water-based activities like snorkeling or swimming, there are a lot of things that are interesting to know about the water and your submersion state. You're probably interested in how deep you are and what the water temperature is. It's also useful to know when you're submerged or if you've exited the water back onto shore or a boat, and what the surface air pressure during your activity is. Using the built-in barometer, CMWaterSubmersionManager can track these metrics for you during your water-based activities. Let me give you some details.
CMWaterSubmersionManager is available on Apple Watch Ultra running watchOS 9. Use CMWaterSubmersionManagerDelegate to listen for depth, temperature, and submersion state data. Make sure to add the Shallow Depth and Pressure capability to your app, and provide a seamless experience when users of your app start their water-based activities by configuring Auto Launch settings. Let me show you how to get started using CMWaterSubmersionManager. To start tracking water submersion state, set up CMWaterSubmersionManager after checking availability. Then, assign a delegate to start receiving updates about submersion state and events. Let's talk a bit about how to receive those updates. Getting updates is simple using CMWaterSubmersionManagerDelegate. There are a number of different types of updates that you can receive. Updates to submersion state, like when you enter and exit the water, are delivered using the didUpdate method with CMWaterSubmersionEvent. You'll receive an errorOccurred update when there are issues, like if you try to receive updates when your app is missing the entitlement or is on an unsupported platform. Water temperature updates are delivered using CMWaterTemperature. They come with a notion of uncertainty, as it takes a couple of seconds for the watch's temperature sensor to equalize with the water. So, when you first become submerged, uncertainty will be higher and then begin to converge once you've spent more time in the water. Note that water temperature is only available when submerged.
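Here's a sketch of that setup and the delegate callbacks; the class is a placeholder, and the print statements are just for illustration.

```swift
import CoreMotion

class SubmersionTracker: NSObject, CMWaterSubmersionManagerDelegate {
    private var submersionManager: CMWaterSubmersionManager?

    func start() {
        // Check availability first (Apple Watch Ultra running watchOS 9 or later).
        guard CMWaterSubmersionManager.waterSubmersionAvailable else { return }
        let manager = CMWaterSubmersionManager()
        manager.delegate = self
        submersionManager = manager
    }

    // Submersion state changes, like entering or exiting the water.
    func manager(_ manager: CMWaterSubmersionManager, didUpdate event: CMWaterSubmersionEvent) {
        print("Submersion event:", event.state)
    }

    // Depth, pressure, surface pressure, and submersion state at regular intervals.
    func manager(_ manager: CMWaterSubmersionManager, didUpdate measurement: CMWaterSubmersionMeasurement) {
        // See the measurement-handling sketch after the depth states below.
    }

    // Water temperature, with an uncertainty that converges while submerged.
    func manager(_ manager: CMWaterSubmersionManager, didUpdate measurement: CMWaterTemperature) {
        print("Water temperature:", measurement.temperature)
    }

    // Errors, like a missing entitlement or an unsupported platform.
    func manager(_ manager: CMWaterSubmersionManager, errorOccurred error: Error) {
        print("Submersion error:", error)
    }
}
```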
You'll receive depth, pressure, surface pressure, and submersion state updates with CMWaterSubmersionMeasurement. During your submerged activity, measurements are delivered to your app at regular intervals. Note that some of this data, like depth, is only applicable when you're in the submerged state, so those values are optional.
Depth under water corresponds to certain depth states. Let me show you how they're mapped.
Let's start out of the water, where you're in the notSubmerged state. At depths shallower than 1 meter, you're in the submergedShallow state. Deeper than 1 meter, you're in the submergedDeep state. With the Shallow Depth and Pressure capability, it's easy to ensure users of your app stay within depth zones that minimize the risk of decompression sickness.
It keeps the maximum depth at 6 meters, and lets you know when you're close to that depth. You can use the maximumDepth property to check the depth being monitored. As you approach 6 meters, you'll enter the approachingMaxDepth state. Beyond 6 meters, you're in the pastMaxDepth state, where data is vended down to 6 meters plus a small margin of uncertainty. Beyond that, you're in the sensorDepthError state. By dividing depth into zones, CMWaterSubmersionManager makes it easy to monitor for changes in depth with a focus on safety and sensor limits. If you're interested in use cases beyond the 6 meter depth maximum, you can check out the documentation for more information on the managed entitlement. Whichever way you choose, creating great experiences for water sports is easier than ever with CMWaterSubmersionManager.
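As a sketch of handling each measurement and its depth state, the comments below reflect the zones just described; handle(_:) is a hypothetical helper you might call from the measurement delegate method.

```swift
import CoreMotion

func handle(_ measurement: CMWaterSubmersionMeasurement) {
    // Depth is optional because it only applies while submerged.
    if let depth = measurement.depth {
        print("Depth:", depth.converted(to: .meters))
    }

    switch measurement.submersionState {
    case .notSubmerged:
        break   // Out of the water.
    case .submergedShallow:
        break   // Shallower than 1 meter.
    case .submergedDeep:
        break   // Deeper than 1 meter.
    case .approachingMaxDepth:
        break   // Getting close to the 6 meter maximum.
    case .pastMaxDepth:
        break   // Beyond 6 meters; data is vended with added uncertainty.
    case .sensorDepthError:
        break   // Beyond the range the sensor can report.
    case .unknown:
        break
    @unknown default:
        break
    }
}
```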
There are a lot of sports out of the water too, though, and I'm excited to share a way to consume high-rate motion data during these activities using CMBatchedSensorManager. Let's start off with some background.
I've talked through some of the ways motion data is delivered to you. Device motion algorithms fuse data from the built-in accelerometer and gyroscope to provide an easy way to track the way a device, like Apple Watch, is moving through space. You may be familiar with CMMotionManager, which delivers these samples on a per-sample basis to your app in real time. The maximum supported frequency is 100 Hz. This means it's a great choice if you have low latency requirements, such as UI components that rely on the instantaneous attitude of the device.
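For reference, the existing per-sample interface looks roughly like this; the 100 Hz update interval and the print statement are illustrative.

```swift
import CoreMotion

let motionManager = CMMotionManager()

if motionManager.isDeviceMotionAvailable {
    // Request the maximum supported rate of 100 Hz.
    motionManager.deviceMotionUpdateInterval = 1.0 / 100.0

    // Samples are delivered one at a time, in real time.
    motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let motion else { return }
        print(motion.attitude.pitch, motion.attitude.roll, motion.attitude.yaw)
    }
}
```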
Now, how does this compare with the way we deliver high rate data using the new CMBatchedSensorManager? CMBatchedSensorManager provides batches of sensor data on a fixed schedule, delivering a batch of data per second. This means we're able to deliver higher rate data at a lower overhead to your app. That's 800 Hz accelerometer and 200 Hz device motion, compared to 100 Hz with the existing CMMotionManager. Now, you can access some of the same data streams that power the features that keep us safe, like fall and crash detection.
Because data is batched, there are some things to consider when thinking about using CMBatchedSensorManager. If your app has workout-centric features that can benefit from high rate data, but without very tight latency requirements, then CMBatchedSensorManager is well suited.
I went through how higher rate sensor data is delivered to you, and how it compares to what's provided by some of our existing interfaces. Let me show you some of the ways it can be used.
Many sports are centered around short-duration impact-based events. This includes activities like golfing, tennis, and baseball, to name just a few examples. In these, capturing more information during that swing movement might be critical to evaluating form and improving your game. This is where capturing higher rate sensor data comes into play. We'll benefit by anchoring this in a concrete example. Let's focus on a baseball swing.
A swing has a couple different phases. In this figure, we can see pre-swing setup, the actual swing, then the post-impact follow-through. An important metric for swing quality is time to contact. In other words, how much time elapses between when the batter starts the swing of the bat, and the time when the bat hits the ball. Using high rate sensor data, we can divide this into three steps. On the batter's wrist, we can see Apple Watch with the x, y, and z directions. We can imagine the path of the wrist as it moves around the batter in blue, with the gravity vector pointing down. To compute time to contact, I'll first detect the point of impact between the bat and the ball using 800 Hz accelerometer. Then, I'll identify the start of the swing using rotation along gravity with 200 Hz device motion. Finally, we can compute the difference between these timestamps, from the start of the swing to impact, called time to contact. Let's start by visualizing the sensor data during the swing to get a sense of what we're looking for.
Here, I've plotted the accelerometer data in the z direction for a window of one second containing one swing. I can see that between 0.5 and 0.6 seconds there's a burst of activity. Our algorithm to detect the point of impact will center around this observation. Let's compare the amount of signal information available during the swing, with 800 Hz accelerometer on top and 100 Hz accelerometer on bottom. In that section of interest, between 0.5 and 0.6 seconds, we now have 80 data points instead of 10, giving us a much finer grained picture of what's going on. This helps us zero in on the things we're interested in, like impact. Now, let's look at the same swing from the device motion perspective. This plots rotation rate along gravity at 200 Hz. I can see the swing start by the way rotation rate begins to change at around the 0.3 second mark. Let's put these together. With these plots aligned by time, I can get a feel for how the information in 800 Hz accelerometer and 200 Hz device motion helps me compute time to contact. Now that I have a good idea of how the swing shows up in the sensor streams, I can use CMBatchedSensorManager to start streaming and processing data.
First, we want to confirm the data is available on this platform. You can do so by checking isAccelerometerSupported.
You can check for device motion support using a similar property. Apple Watch Series 8 and Ultra support both high rate accelerometer and device motion.
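A small sketch of that check, wrapped in a hypothetical helper:

```swift
import CoreMotion

// Apple Watch Series 8 and Ultra support both 800 Hz accelerometer and
// 200 Hz device motion batches.
func highRateSensorsSupported() -> Bool {
    CMBatchedSensorManager.isAccelerometerSupported &&
    CMBatchedSensorManager.isDeviceMotionSupported
}
```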
Because this is a workout-centric API, you need to have an active HealthKit workout session to get data. Once you're in a HealthKit workout session, you can start receiving updates. With Swift async support, it's easy to receive batches of sensor data and process each batch. Make sure you evaluate conditions to exit the loop; for example, if the workout has ended.
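Here's a hedged sketch of that flow, assuming the async-sequence interface (accelerometerUpdates()) and a minimal HealthKit workout session; the workout configuration and the process(_:) helper are placeholders for this example.

```swift
import CoreMotion
import HealthKit

func streamSwingData(healthStore: HKHealthStore) async throws {
    // Batched sensor data is only delivered during an active workout session.
    let configuration = HKWorkoutConfiguration()
    configuration.activityType = .baseball
    configuration.locationType = .outdoor

    let session = try HKWorkoutSession(healthStore: healthStore, configuration: configuration)
    session.startActivity(with: Date())

    let manager = CMBatchedSensorManager()

    // Each iteration delivers one second's worth of 800 Hz accelerometer data.
    for try await batch in manager.accelerometerUpdates() {
        process(batch)

        // Evaluate conditions to exit the loop, for example when the workout ends.
        if session.state == .ended { break }
    }
}

// Placeholder for per-batch processing; the swing example below fills this in.
func process(_ batch: [CMAccelerometerData]) { }
```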
Remember that errors are surfaced if we aren't authorized for motion data or on an unsupported platform. Now, let's zoom in on what I want to do with each batch of data.
For each batch, I'll call a feed function to run my algorithm on the data. When the bat makes contact with the ball, I expect to see a pronounced response in the z direction. Remember, the z axis is perpendicular to the watch crown. This reverberation is a good approximation of the impact. I'll process each accelerometer batch with this in mind.
First, I want to better isolate the higher frequency response as the force is displaced upon contact. To do this, I'll filter the z samples as fz. Using the filtered data, I'll approximate the point of impact as the peak of the filtered signal. Let's keep track of the index associated with that sample as impactIndex. Using the impactIndex drawn from the filtered signal, I can grab the impact timestamp from the original batch of data. This brings me to the second step on the way to compute time to contact: detecting the start of the swing using rotation along gravity.
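Here's a sketch of that impact step. highPassZ(_:) is a hypothetical placeholder for whatever filter you use to isolate the higher-frequency response; it is not a CoreMotion API.

```swift
import CoreMotion

func detectImpactTime(in batch: [CMAccelerometerData]) -> TimeInterval? {
    // Filter the z samples to emphasize the sharp response at contact.
    let fz = highPassZ(batch.map { $0.acceleration.z })

    // Approximate the point of impact as the peak of the filtered signal.
    guard let impactIndex = fz.indices.max(by: { abs(fz[$0]) < abs(fz[$1]) }) else {
        return nil
    }

    // Use the index to grab the impact timestamp from the original batch.
    return batch[impactIndex].timestamp
}

// Hypothetical filter; a real implementation might use a high-pass or
// band-pass filter tuned to the impact response.
func highPassZ(_ z: [Double]) -> [Double] { z }
```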
Imagine the path of Apple Watch as the batter takes a swing; it'll follow a path around the body to meet the ball. So I expect to see nonzero rotation rate along gravity during the swing, and close to zero rotation rate along gravity outside of the swing. By streaming high-frequency device motion into a local buffer, I can use the rotation rate data to identify where the swing started.
Let's take a closer look at what my compute function looks like. I'll iterate through my local buffer backwards to investigate around the point of interest: the impact timestamp. Since I know the start of the swing must precede the impact, I can run a series of shouldProcess checks before processing each sample in the buffer. This can include a timestamp check to confirm the device motion sample is from before the impact time. I can also put bounds on the swing duration. It makes sense that the start of the swing doesn't occur more than a certain duration before it makes contact with the ball.
For samples that pass my series of initial checks, I compute the rotation along gravity in my computeRotation function, which sums the product of each axis's rotation rate and gravity values. With rotation along gravity computed, I can start looking for the start of the swing. A simple swing start check might look for the consistent failure of the rotation rate along gravity to meet a threshold value. Once I stop seeing rotation rate along gravity meet that threshold, I'll use that as the start time of the swing and exit out of the loop. As a final check, I'll validate the swing we detected. Here, I can look at the accumulated rotation along gravity during the swing and confirm it's within expected thresholds. And with that, I can return the start timestamp I detected. This brings me to the final step, step 3, where I now have everything I need to compute time to contact. Let's go back to our initial feed function.
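Here's a sketch of those two helpers. The threshold and duration bound are illustrative values chosen for this example, not values from the session, and the buffer is assumed to hold the recent 200 Hz device motion samples.

```swift
import CoreMotion

let rotationThreshold = 4.0                 // rad/s along gravity; illustrative value.
let maxSwingDuration: TimeInterval = 1.0    // Upper bound on swing length; illustrative.

// Rotation along gravity: the sum of each axis's rotation rate multiplied by
// the corresponding gravity component.
func computeRotation(_ motion: CMDeviceMotion) -> Double {
    motion.rotationRate.x * motion.gravity.x +
    motion.rotationRate.y * motion.gravity.y +
    motion.rotationRate.z * motion.gravity.z
}

// Walk the buffered device motion backwards from the impact and find where
// rotation along gravity stops meeting the threshold.
func detectSwingStart(in buffer: [CMDeviceMotion], impactTime: TimeInterval) -> TimeInterval? {
    for motion in buffer.reversed() {
        // shouldProcess-style checks: the sample must precede the impact and
        // fall within a plausible swing duration.
        guard motion.timestamp < impactTime,
              impactTime - motion.timestamp < maxSwingDuration else { continue }

        if abs(computeRotation(motion)) < rotationThreshold {
            // Rotation along gravity no longer meets the threshold: treat this
            // as the start of the swing. A fuller version would also validate
            // the accumulated rotation over the detected swing.
            return motion.timestamp
        }
    }
    return nil
}
```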
Using CMBatchedSensorManager, I started by streaming accelerometer and device motion data. I detected impactTime by using the filtered accelerometer data in the z direction. Then, I identified the start of the swing near that impact timestamp by inspecting rotation rate along gravity. I'll take the difference between the two timestamps to compute time to contact.
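Putting it together, a hypothetical feed step might combine the two helpers sketched above:

```swift
import CoreMotion

func feed(accelerometerBatch: [CMAccelerometerData], deviceMotionBuffer: [CMDeviceMotion]) {
    guard let impactTime = detectImpactTime(in: accelerometerBatch),
          let swingStart = detectSwingStart(in: deviceMotionBuffer, impactTime: impactTime) else {
        return
    }

    // Time to contact: elapsed time from the start of the swing to impact.
    let timeToContact = impactTime - swingStart
    print("Time to contact: \(timeToContact) s")
}
```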
This was a simple example of how you can develop features based on sensor data. The extra signal information in the higher frequency data streams opens the door to a lot of other investigations too. Let's take a look.
We saw this accelerometer data trace earlier with the 800 Hz accelerometer stream in the z direction. Now, take a look at the second plot. It looks pretty similar, but it's not quite the same. This is a trace from a missed swing, where the bat did not actually make contact with the ball. Even though the swing motion itself is similar for both, you can see how the high rate data streams give us extra insight into differences like this. The algorithms you develop can capitalize on these differences to detect things that weren't possible before.
To summarize, the same device motion algorithms provide data to you in a couple different ways.
CMMotionManager vends data at a maximum of 100 Hz on a per-sample basis. This makes it a great choice if you have low latency requirements on a sub-second scale, or have motion-based features outside of workouts. The new CMBatchedSensorManager delivers data at higher rates, with caps of 200 Hz device motion and 800 Hz accelerometer on a batched schedule, delivering a batch of data per second. This makes it useful for workout-centric features that can benefit from high-rate data. It's available on Apple Watch Series 8 and Ultra. Even though I focused on a baseball swing to use CMBatchedSensorManager, these higher rate data streams can provide valuable insight into the motion of Apple Watch for all workouts, particularly during short duration or impact-based activities.
That was CMBatchedSensorManager, and this concludes my review of what's new with Core Motion. We have great ways to interact with motion, whether it's on headphones or Apple Watch.
There are a ton of cool ways to use motion data, and I've covered just a few examples. I encourage you to try them out, and check out the documentation for more information. Make sure to give us feedback too. For an example of how motion data is translated into health-based features, like measuring mobility, check out the "Beyond Counting Steps" session from WWDC 2020. For more information on running workouts to take advantage of CMBatchedSensorManager, take a look at "Building custom workouts with WorkoutKit." We're so excited to see how you use motion data to create amazing new experiences. Thank you for watching.