WWDC 2019 Platforms State of the Union
Good afternoon, ladies and gentlemen. Please welcome Vice President of Software Sebastien Marineau-Mes.
Good afternoon, everyone, and welcome to WWDC. Did we love the morning keynote? There you go. Now this year is one of our biggest for developers and we're really excited to show you what we've been working on and see what you think. This morning's keynote was just a taste of what's happening this year. There is so much more that we want to share, and this afternoon we're going to focus on the areas that matter most to you as developers. Are you ready to hear more? There we go.
Now we've taken a huge step in terms of developer experience this year with the new SwiftUI framework as well as great interactive tools in Xcode.
And we've really seen each of our platforms get even deeper at what they do best. We have powerful new pro capabilities for the Mac, and independence for watchOS.
On iOS, a new Dark Mode and great app updates.
And finally, on iPadOS, a powerful operating system that now stands on its own.
Now these platforms represent a diverse range of devices, and building great support for them is easy thanks to a choice of many tools and APIs like Auto Layout, Size Classes and SwiftUI.
So no more letterboxing. Your users get the best experience when your app works well on a wide range of device sizes. And starting next spring, it will be an App Store requirement to deliver UI that adapts to different screen sizes.
Now tvOS is offering cool new capabilities for developers this year, including multi-user support for third-party apps, new UI elements and options, SwiftUI and of course support for Xbox and PlayStation game controllers. Now this morning we announced an incredible new hardware platform with the new Mac Pro. Do we love it? It is incredible, and it will really unlock amazing new kinds of apps.
And we also built technologies that span all of our platforms, and we'll have a look at a few of these areas today, including accessibility, privacy, machine learning, Siri, augmented reality and finally Metal.
Now we want to focus on these three big areas this afternoon, and we're going to start with developer productivity.
Now everyone in this room knows that great tools can dramatically improve your productivity. Great tools give you more time to be creative and they let you build better apps.
And the foundation of that experience is the programming language. Over the last five years, Swift has matured and is built into every Apple platform. And Swift gives us the foundation for SwiftUI.
And Xcode is so much more than a code debugger -- sorry, code editor and debugger. It includes everything that you need to build an app, including support for continuous integration and testing. And it gives you tools that let you explore new technologies such as machine learning and augmented reality.
And finally, built on the strong foundation of our platforms, the SwiftUI framework will revolutionize how you build user interfaces. And together, these three elements deliver a whole new level of productivity and they will fundamentally transform how all of you build apps.
Now, are you ready to dive right into SwiftUI? Let's have Josh come up and tell us more. Josh? Thanks, Sebastien. Okay, so SwiftUI, as you saw this morning, is a brand-new user interface framework built from the ground up in Swift for Swift.
We designed it to let you write less code and have the code that you do write be better code, while letting you use more of that code across all Apple platforms.
Now first of all, there's so much functionality built into each individual line that you write that you're just going to write way less code.
So let's take the app for choosing macOS release names that we looked at this morning, but without the animated transitions.
If you've written an app with UIKit before, you know the types of pieces that you need in order to build this interface. It's not a lot of views, but there are many individual details that you need to get right.
With SwiftUI, it is way less code. Fewer than 20 lines focused on just three key things.
First, a few lines to define the structure and layout of your views. Then some image and text views to display your content.
And finally, parameters and modifiers to adjust how it all looks.
Now let's take a closer look at just a few of these lines.
The scrolling list itself is barely any code at all. You just declare the list and then describe the model objects to be used in each row. There's no setup, there's no configuration and there's no callbacks. Now the image at the top is just as simple. You display an image, clip it to a circle and add a shadow. And it's not just less code. It's better code. We designed the API to make the obvious approach be the right approach.
The right way to create this label is exactly the one line of code that you would think to write. It supports dynamic type, Dark Mode and more. In fact, even the string interpolation used here is fully localizable.
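As a rough, self-contained sketch of the pieces just described (the model type and asset name here are illustrative stand-ins, not the exact demo code), it might look something like this:

```swift
import SwiftUI
import Foundation

// A minimal sketch of the ideas above; `Release` and the asset name are
// assumptions made for illustration.
struct Release: Identifiable {
    let id = UUID()
    let name: String
}

struct ReleaseHeader: View {
    var body: some View {
        Image("yosemite")          // assumed asset name
            .clipShape(Circle())   // clip it to a circle
            .shadow(radius: 4)     // and add a shadow
    }
}

struct ReleaseList: View {
    let releases: [Release]

    var body: some View {
        // Declare the list and describe the view for each model object.
        // No setup, no configuration, no callbacks.
        List(releases) { release in
            Text("\(release.name)")   // one line; Dynamic Type, Dark Mode, and
        }                             // localizable string interpolation for free
    }
}
```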
This simplicity eliminates entire categories of errors.
Looking at our list again, its rows update automatically if the model changes, ensuring that your UI is always up to date and never displaying out-of-date data. And it's easy to read too. The code for this image with a corner radius of three says exactly that.
Reading SwiftUI is like having someone explain that interface to you. And SwiftUI is available everywhere, helping you reuse more of your code across all Apple platforms. Now you've long been able to share your model and low-level drawing and compositing code, but higher-level UI code has remained mostly platform-specific. With SwiftUI, we're raising that bar, enabling you to share far more. Now of course you're still going to want to tailor your interfaces for each individual platform to ensure that your app feels great everywhere that you deploy it. But with SwiftUI's common set of API patterns, you can learn those tools once and then apply them everywhere, getting you a native interface on each platform you deploy to.
SwiftUI is designed with a strong set of four core principles. First, a declarative syntax shifts UI programming away from how to update the screen and instead lets you just focus on what you want to display.
For example, let's say that you want to build a label with a headline font and a gray color. Describing how to get that is a multi-step process with many steps having to happen in a specific order. But describing what you want requires no translation. Text that says Done with a font for a headline that's colored gray. SwiftUI lets you say exactly that with a great new declarative syntax. And it's the minimum code necessary to describe exactly your idea. And iteration becomes significantly faster as well. If you later need to change that label to become a button, that's a one-line change. I know, it's pretty great.
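As a small sketch of that idea (wrapped in views so it stands alone; the names are illustrative):

```swift
import SwiftUI

// Declaring what you want: a "Done" label with a headline font, colored gray.
struct DoneLabel: View {
    var body: some View {
        Text("Done")
            .font(.headline)
            .foregroundColor(.gray)
    }
}

// Turning that label into a button is essentially a one-line change.
struct DoneButton: View {
    var body: some View {
        Button("Done") { /* handle the tap */ }
            .font(.headline)
            .foregroundColor(.gray)
    }
}
```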
All right, so our second principle is that we should provide automatic functionality whenever it's possible. This eliminates the need for a huge amount of code that you used to have to write by hand.
Our app for naming macOS releases is pretty simple, yet it does include a huge amount of automatic functionality. We get default handling of spacing and safe area insets. Localizability and layout adjustments for right-to-left languages. Dynamic type, Dark Mode and so much more, all from that one minimal description. It's an incredible amount of automatic functionality for a small amount of code, but there is one more thing that is just so important that it really deserves special attention.
Our modern interfaces are interactive and they're animated. And with SwiftUI, that same interface declaration is also automatically fully animatable.
Animations can be enabled for the entire hierarchy with just one line of code. There's no bookkeeping, there's no preparation and there's never any cleanup. If you've used Keynote Magic Move animations before, SwiftUI animations are that easy and even more powerful.
And for views that are added and removed, you can specify how they transition in and out with just one more line of code. While animations are in progress, your app remains fully interactive and responsive and ready to handle user input at any time. And if the user ever interrupts one of those animations or if you need to move to a new location, SwiftUI handles all of that automatically too.
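A minimal sketch of those two ideas, assuming a simple expanding view (the names and content are illustrative):

```swift
import SwiftUI

struct ExpandableDetails: View {
    @State private var isExpanded = false

    var body: some View {
        VStack {
            Button("Toggle Details") {
                withAnimation(.easeInOut) {   // one line enables animation for this change
                    isExpanded.toggle()
                }
            }
            if isExpanded {
                Text("Details")
                    .transition(.slide)       // one more line describes how it enters and leaves
            }
        }
    }
}
```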
Now our third principle is that compositional APIs are easier to learn and let you iterate a lot faster.
We've looked at how we can declare an individual view like this text label. But declaring more complex views is just as easy. You just compose together multiple smaller pieces.
Containers like horizontal and vertical stacks let you easily build powerful layouts by just combining together multiple simple pieces.
And SwiftUI applies composition to view properties as well, using a standard modifier syntax. A common set of modifiers can be applied to any view, like color applied here to make this text gray.
This compositional approach lets you learn a small set of views and modifiers and then combine them together to create progressively more powerful interfaces.
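For example, a row like the ones in the demo can be composed from a couple of stacks, some text and an image, with a modifier or two layered on (a sketch with illustrative names and content):

```swift
import SwiftUI

struct TrailRow: View {
    var body: some View {
        HStack {
            Image(systemName: "star.fill")      // a built-in SF Symbol
            VStack(alignment: .leading) {
                Text("Half Dome")
                Text("Very hard")
                    .foregroundColor(.gray)     // a standard modifier, applied to any view
            }
        }
        .padding()                              // modifiers compose too
    }
}
```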
And our final principle is that your interface should always be in a consistent state.
Your UI is a reflection of your app's data, so the two should be in sync at all times. With traditional APIs, this can be error-prone. But with SwiftUI, your interface updates automatically any time the data changes.
Now there are two common places that your data is likely to come from. The first is your model objects. And you can use your existing model objects directly by simply conforming them to a new bindable object protocol. Its only requirement is that you specify when the data in your model changes. The second place is temporary UI state, like whether the view is currently in editing mode. These are declared using a simple state wrapper applied to any property on your view.
We're all used to every property on every view being mutable, but once you start using SwiftUI, you're going to be shocked to realize how little mutable state your app actually needs. Now whenever your model or state changes, that UI is going to update automatically. And because it's all Swift code, you get this behavior while still being able to use your model objects directly within that interface declaration.
You can even transform and format values in line with no additional indirection needed.
For example, this string interpolation can be used to format a date, resulting in fully localized formatted text. All of this means that with SwiftUI, you're going to write less code and get a more consistent UI.
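A rough sketch of both data sources (note: the protocol referred to in this session as a bindable object protocol shipped as ObservableObject; the model and property names here are illustrative):

```swift
import SwiftUI
import Combine
import Foundation

// The model announces when its data changes; the UI updates automatically.
final class TripModel: ObservableObject {
    @Published var title = "Yosemite"
    @Published var date = Date()
}

private let tripDateFormatter: DateFormatter = {
    let formatter = DateFormatter()
    formatter.dateStyle = .medium
    return formatter
}()

struct TripView: View {
    @ObservedObject var model: TripModel   // a model object, used directly in the view
    @State private var isEditing = false   // transient UI state, owned by the view

    var body: some View {
        VStack {
            // String interpolation can format values in line, fully localized.
            Text("\(model.title), \(model.date, formatter: tripDateFormatter)")
            Toggle("Editing", isOn: $isEditing)
        }
    }
}
```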
Those are the four core principles of SwiftUI. A powerful declarative syntax enabling a huge amount of automatic functionality with a compositional API that ensures your interface is always in a consistent state.
And a great new framework deserves incredible tools. And we've created a whole brand-new workflow within Xcode designed from the ground up for SwiftUI.
You get the power and flexibility of code combined with the ease of use and rapid iteration of a UI tool.
You always have access to both at all times, so you'll never have to choose between them again.
And because the tools work on your actual existing source code, you have a truly live development experience.
Now to really understand how amazing this workflow is, you just need to see it again live. And to show it to you now, I'll invite Kevin up to give you a demo. Thanks, Josh.
You guys ready to have some fun? All right, so I'm building a hiking app and I want to add a view to my table view cell that tells me how difficult a trail is. So we're going to start in the library. We're going to have some text. And as I'm dragging, Xcode is suggesting layouts for me. Now I just tell Xcode where I want it and Xcode figures out the layout for me.
And now we can edit the properties of this view. So I'm just going to command click in the canvas here and get custom-tailored inspectors right here. Let's make this text a little bit smaller. Now watch -- watch the code while I do this. You'll see it writes the code for me.
Now, we can do the same thing over here in the source editor by just editing the code. And you can see Xcode builds and runs my code and updates the canvas on the right.
Now no matter where I'm working, I get access to all of my design tools. So I'm just going to command click on this V stack, going to open up the inspectors. And again I can just modify the properties that I want. It's just so fast to iterate. Now, you might notice that this view has a couple inputs like this title and this difficulty. So how does Xcode know what data to show in the preview? This has always been one of the challenges with UI development: what data do we show during design time? And this is why we invented Xcode previews. What is a preview? I'll show you. So let me scroll down here.
This snippet of code here -- a preview is just a snippet of code in my application that configures it for design time. Because it's in my application, I get access to all the code in my project. And because it's in my project, I can check it in and share it with my team members. And it's so easy to try different data. Now actually here Half Dome is pretty hard. So let's see what it looks like at hard. And that's because it's 16 miles, not 6. And that's really compiling code.
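A rough, self-contained sketch of such a preview (with simplified stand-in types for the hiking app, and a couple of the preview modifiers that come up next in the demo):

```swift
import SwiftUI

struct Trail {
    let name: String
    let difficulty: String
}

struct TrailCell: View {
    let trail: Trail
    var body: some View {
        VStack(alignment: .leading) {
            Text(trail.name)
            Text(trail.difficulty)
                .font(.caption)
                .foregroundColor(.gray)
        }
    }
}

// The preview is just code in the project, so it can use any data you like
// and be checked in and shared with the team.
struct TrailCell_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            TrailCell(trail: Trail(name: "Half Dome", difficulty: "Hard"))
            TrailCell(trail: Trail(name: "Valley Floor", difficulty: "Easy"))
                .environment(\.colorScheme, .dark)              // a Dark Mode preview
            TrailCell(trail: Trail(name: "Snow Creek", difficulty: "Hard"))
                .environment(\.sizeCategory, .extraExtraLarge)  // a larger Dynamic Type size
        }
        .previewLayout(.sizeThatFits)                           // focus on just the cell
    }
}
```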
Now because this is SwiftUI code, I get access to all the modifiers that I would use for the rest of my UI development. So we can see what it looks like in Dark Mode.
And I also have preview-specific modifiers as well. So by default previews are the size of a device, but since we're working on a table view cell, let's just focus in on that content. So I'll just make that the size that fits. Okay, now here is the really cool thing about previews. You can have as many as you want. So let's add a second preview with completely different data. But let's not stop there. Let's command click, repeat it a couple of times. Let's enumerate over some common dynamic type sizes and then let's configure our cell to use that dynamic type size. And there at a glance I can see my cell with light mode, Dark Mode, multiple different dynamic type sizes all at the same time. Now when I tap on this cell, I want to go to a detailed view. So let's switch over to that file now and take a look. Now I've learned through the years of hiking that you should never judge a trail by its name. So it's really important to me that we can configure our detail view to make that image really large. And I've already done that with some SwiftUI state here. So when we tap the banner, we want to toggle that expansion state. Now I can test that right here in the UI just by clicking this play button. This takes any preview that I have and makes it fully interactive.
So I can just click and test out those different expansion states. Now we can really polish this up with an animation. It's so easy. So I can just wrap my state change in a withAnimation block, and now I get a beautiful default animation.
And it's just as easy to customize that. So let's slow it down for some dramatic effect, and now I get a beautiful animation opening it up. Now what's so cool about SwiftUI is that every animation is cancellable and reversible, leaving my application responsive the whole time.
Okay, so we have a table view cell and we have a detail view. So let's put it all together. So I'm going to switch over to my last value here, which has a bunch of different trails and a list.
So what I want to do is I actually want to see what this looks like on a real device. So with the click of a button -- let's click this -- Xcode is going to build my project for the device. It's going to install it and it's going to launch my preview right here on the device. And you can see it's fully interactive right here. So first let's use that cell that we built. So I'll just change this text to be a trail cell. Now you can see we're seeing our trails show up there. And now when I tap on this, I want it to go to our detail view that we built. This is so easy with SwiftUI. It's just going to wrap this in a navigation button, and that tells us to go to our detailed view. And now you can see the chevron shows up. So let's check out Snow Creek and let's move in on that picture. And okay, snowy and difficult. That does not sound like a fun trail. So what I'm going to do with just one line of SwiftUI code is add swipe to delete. And now we can say, "We don't want to do that trail." And now lastly, let's see what it looks like in Dark Mode. Without any work, it's going to put my preview in Dark Mode and you can see it looks beautiful. We can tap on our valley floor, zoom in. And that looks like a great way to end the week. So we just built an app with navigation, dynamic type size, light mode, Dark Mode, multiple different data and saw on the real device all without ever building and running. Now that's fun. All right, Josh, back to you. Thanks, Kevin. All right, this is really an incredible new workflow for fully native code. What you do in the tool is always debuggable, diffable, searchable, and understandable. And because you can always edit the code directly, you get incredible flexibility in your workflow. And SwiftUI is deeply integrated in all of our operating systems, so using it results in a fully native app for whichever platforms you target. You get the same performance, the same behavior and the same controls as any other native app. And you can adopt SwiftUI at your own pace. You can use it for anything, from just one view in your app all the way up to the entire application. It works seamlessly with all of your existing UIKit, AppKit and WatchKit code so you don't need to rewrite anything. And to get you up to speed quickly, our documentation team has developed a brand-new style of interactive documentation. It quickly takes you step-by-step from creating a new project all the way up to building a fully interactive interface. So you'll be up to speed in no time. So that's SwiftUI and some of the new tools in Xcode. Of course, this is a huge year for Swift and Xcode, so there's even more to this story. And to tell you more about it, I'll hand things off to Matthew. Thanks.
Thank you, Josh. Our tools released this year bring together innovations in Swift and Xcode to deliver some awesome results.
So let's start with Swift.
Now in our fifth year, Swift has matured and is continuing to leap forward. Our newest flagship technologies, from machine learning to augmented reality, are possible only because of Swift and because it's now part of our OS.
To achieve this, earlier this spring we introduced ABI stability, which reduces the size of your apps by using a single shared Swift runtime. And we are following that up today with module stability.
This completes the picture by ensuring compatibility for your binaries with the current and future versions of the Swift compiler.
And these come alongside a number of other language features, tooling additions, and performance and code size improvements, all of which further extend the potential Swift brings to your projects.
So Swift is already the language for your apps and now more than ever for common code to share across all Apple platforms.
In fact, sharing is the reason we created Swift Packages which are the best way to develop and share your own code and reuse code from others. And today we have two big announcements.
GitHub will be adding support for Swift packages to the GitHub Package Registry, which is perfect because Xcode now seamlessly supports Swift packages for apps on iOS, iPadOS and all of our platforms. Swift packages are top-level items in your workspaces, always visible, always accessible, and deeply integrated.
Packages from the community and those packages you create get instant access to all of Xcode's workflows for source control, debugging, testing, the works.
So Swift packages built into Xcode, it's sharing code the way you've always wanted.
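A minimal package manifest sketch, with illustrative names and platform versions:

```swift
// swift-tools-version:5.1
import PackageDescription

// A sketch of a shareable Swift package; the name, products and targets are assumptions.
let package = Package(
    name: "TrailKit",
    platforms: [.iOS(.v13), .macOS(.v10_15), .watchOS(.v6), .tvOS(.v13)],
    products: [
        .library(name: "TrailKit", targets: ["TrailKit"])
    ],
    targets: [
        .target(name: "TrailKit"),
        .testTarget(name: "TrailKitTests", dependencies: ["TrailKit"])
    ]
)
```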
Now that's just the start of our Xcode release this year, which is focused on maximizing your productivity. And we have a number of improvements today to share with you as we take Xcode up to 11.
So let's get started with one of our biggest changes which is the Xcode Workspace. We are giving you full editorial control. You can now create and manage editors however you like.
Whatever your preferred style and layout is, you can simply add and remove editors whenever and wherever you see fit.
And even better, your workspaces can now focus too. So you can take any editor and maximize it, and then when you're done, just put it back and it will go right back to where you started. So whether you're working on the smallest of laptops or with the largest of displays, your workspace now works for you.
Now the related content in our editors, smart selections like counterparts, is also getting a huge boost.
There are new options like previews, canvas, live views and more.
You can use the related content in any editor in your workspace.
And you'll like this one best of all. When there is no content, they automatically disappear so you no longer need to manage their visibility.
Now once you get your workspace set up, it's all about the editing, and I'd like to show you a quick demonstration of some of the new source editing features we have for you this year.
To help you configure each editor the way you like, there's a new Options menu in the upper right.
You can see here I can enable the assistants or any of the related content. I could turn on code coverage or source control authors. I'm going to turn on our newest feature, the minimap.
So the minimap gives you a structural overview of your file to help you navigate. You can see documentation, methods and functions. It makes it really easy to move about the file.
If you'd like to leave yourself some other landmarks, you can use the mark syntax to add labels and horizontal dividers that show up in your source and in the mini map.
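For example (the label text is up to you):

```swift
// MARK: - Trail model
// A plain MARK: adds a label; the "-" also adds a horizontal divider in the
// source editor and the minimap.

struct Hike {
    let name: String
}

// MARK: - Networking helpers
```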
Now if I hover over the mini map, you'll see the symbolic landmarks for the file. Here's a pro tip: hold down the command key and you'll see all of the landmarks for the file to make it really easy to navigate to exactly where you want to go. And the mini map will show you issues, test failures, even in-file find results. And we've made it fully accessible.
You'll find our source editor now pops and your code is more vivid as we've deepened our syntax coloring. You'll also see that we've increased our documentation support here with italics, bold and code voice in the documentation.
You'll also see that when you add documentation, it automatically adds in missing parameters that you may have added after you typed your documentation. And what's even better, to help you keep your documentation and code in sync, you'll find Edit All in Scope now changes both at the same time. Now we also wanted to add some additional help to help you keep track of your changes. If I'd like to review all of the changes for this file, I can open up the new Source Control History Inspector which shows me all the changes that have been made to this file, and I can quickly jump to any commit. And because it's in the inspector, this now works for any file type in your project.
To help you review local changes, we've also improved the change bar. When I hover over the change bar, it shows me the local changes. But I can now have it show me the code before the change that I made to get a quick snippet. And of course it's live, so as I start typing, it will update to keep me up-to-date. So those are just some of the many source editing features you'll find in Xcode 11.
Okay, so testing is another key development workflow. And Xcode already has great support for writing tests, which of course you all already know because you're writing lots of them, right? Yep. Yeah.
Excellent. That's what we like to hear. Now what you may not know is that Xcode can do even more with your tests by using fantastic tools like runtime issues, sanitizers, localization simulation. And we add more of these every year.
With so many options, what's been missing is a way to combine them all in one place to be used in parallel. And for that we are adding test plans.
Now the power of test plans comes from running your tests in many configurations.
With a few simple selections, you can instantly test for your global audience.
And this configuration is also perfect for capturing screenshots for the App Store or collecting details for your localizers.
Yeah, it's okay to applaud for that. This is a big deal. You can then see your app from every angle by adding in other diagnostics, tools and parameters.
And your coverage increases even further when you run your test plans against many devices and OS combinations to get a fully comprehensive view of how your app is doing. Now for testing at this scale, test plans work perfectly with Xcode Server which can take full advantage of the new Mac Pro and with Xcode's new parallel testing on simulators and devices.
The result with test plans is you now have one command that does all of the testing for your apps. So this is a major advancement.
Now often when testing and debugging, it's necessary to replicate user scenarios. And our new Device Conditions answers the call.
You can now set varied conditions for network throughput and thermal states on your devices and see how your apps respond.
Now rest assured these are actually just simulations. We're not actually going to make your devices get super-hot here.
You can enable the conditions in Xcode's Devices window. And the devices will display banners when the conditions are active.
You can tap the banner to disable the conditions and Xcode will automatically terminate the conditions when you disconnect the devices.
Now for all the testing you're going to be doing, we've also improved our result bundles which are now standalone. Whether you create them in Xcode or from the command line, you can share them via email, attach them to bugs and then just double-click on them to open them back in Xcode to review all of the details. Now to help you improve your apps even further, we are introducing two new feedback tools.
First, app performance metrics for iOS and iPadOS App Store apps.
When users opt into sharing analytics, you'll receive anonymized metrics for battery life, launch time, memory use and more.
These metrics are aggregated and displayed in the organizer alongside the crash and energy logs, and are a great way to monitor and improve the performance of your app with each build.
These aggregated metrics, we started collecting them in the spring with iOS 12.2. So many of your apps will already have data to review.
Now another great source of feedback is directly from your users, and TestFlight will now let users share what they think.
TestFlight apps automatically enable user feedback.
When a user takes a screenshot in your app, they will have a new option to share it as beta feedback, optionally adding in their comments. You can review all the feedback on App Store Connect and download all the details for your bug tracking systems.
So all these features today are just a small taste of our Xcode release, which brings together innovations in Swift, the SDK and across all of our tools. All this to help you do your best work faster than ever. And that is Xcode 11. And now I'd like to invite Sebastien back to tell us more about Apple's platforms. Sebastien? Thank you, Matthew. Wasn't that amazing? Really, really great features to help all of you build better apps. So now let's switch to our platforms. And of course our platforms themselves are tailored to provide great experiences and they really reflect the unique way in which each of them is used. So some of what we're doing this year is unique to each of them, and what we're going to do now is dive right in to macOS and tell you what we're doing there. macOS Catalina is a great release with a rich set of compelling new features such as screen time and the new Music app. And the Mac takes another great step forward with amazing productivity features such as Sidecar. We're going to love Sidecar, right? All right. Well, with an active installed base of over 100 million users, the Mac is a vibrant platform with a rich app ecosystem. And the Mac ecosystem is full of powerful native apps that you have created using our AppKit framework. And a great example of this is Pixelmator Pro.
Now AppKit is the powerful framework that enables the full capabilities of the Mac. But we also recognize that there are a number of apps available for iPad that would be great on the Mac, but you haven't always had time to use AppKit to bring them there. And so this year we're adding an additional way to create native Mac apps, with a technology that allows you to take an iPadOS app and bring it to the Mac with minimal effort.
This is a huge opportunity for the Mac to tap into the world's largest app ecosystem. There are over a million iPad apps out there and we think many of them would be really great on the Mac as well.
Now to achieve this, we've ported more than 40 frameworks and libraries from iOS to the Mac. And if you're an existing iOS developer that doesn't have a Mac app yet, you're going to love having the same APIs available on both platforms. In fact, we've made available almost the entire iOS API set with only a small number of exceptions for unique mobile features.
Now we achieved this by adapting UIKit as a native framework. That enables iPad apps to run on the Mac and feel just as fast and fluid as other apps on the platform.
And by integrating UIKit directly into macOS, many of the fundamentals are automatic. So many Mac desktop and windowing features are added without any work on your part, and we adapt platform-unique elements like touch controls to keyboard and mouse input, saving you a ton of work and giving you a huge head start in your development. Now we've been working on this technology for a number of years and we're using it for our own apps, which has allowed us to prove out and refine the technology before we make it available to you in macOS Catalina this year.
If you have an iPadOS app, targeting the Mac is super easy. There are basically three steps.
First, click the checkbox in Xcode's project editor and turn on Mac support for your project. It's as easy as that. And here's the magic: that single project and target builds apps for all three platforms. And when you make a change to your source, all three apps update automatically.
The second step is to ensure that your app is great on the iPad.
Better iPad apps make better Mac apps as well. So the work that you put in to adopting the newest technologies and optimizing for larger iPad screens translates wonderfully to the Mac.
Just following best practices such as supporting external keyboards will also result in richer Mac experiences.
The third step is to take advantage of specific Mac capabilities.
And this is where you make customizations that take full advantage of typical Mac-specific user interface elements like full menus and toolbars. And if applicable, sidebars and their special materials. Now to show you how easy this is, I'd like to invite Matthew back onstage for a demo. Matthew? Thank you, Sebastien.
Here we have our travel application running in the iPad Simulator. It's a list view of locations. When I select a location, the globe will rotate. And we have a logging area where I can start keeping track of my trips in a journal.
Let's follow Sebastien's three steps and bring this app to the Mac.
Step one, check the box.
I'll quit the simulator and here in the target editor I'll check the box for Mac support to enable it.
That's it. I can now build and run my application for the Mac.
By checking the box, we added the Mac as a destination. So just like I can pick between devices and simulators for my app, I can now choose the Mac.
And here's the Mac app. List View on the left, select the location and log in.
I know, pretty powerful checkbox. All right, let's move on to step two, make a great iPad app.
I've not implemented any actions for my list view yet, things like Add to Favorites or Share.
When I implement those for the iPad, they'll show up as a context menu on the Mac. It's a double win.
So I'll quit the Mac app and change to my sidebar controller here, and I'll just add a table view delegate method that sets up those menus for each item.
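A sketch of that delegate method (the controller name, action titles and handlers are assumptions, not the demo's exact code):

```swift
import UIKit

class SidebarViewController: UIViewController, UITableViewDelegate {
    // Providing a context menu for each row gives long-press menus on iPad
    // and right-click context menus on the Mac.
    func tableView(_ tableView: UITableView,
                   contextMenuConfigurationForRowAt indexPath: IndexPath,
                   point: CGPoint) -> UIContextMenuConfiguration? {
        return UIContextMenuConfiguration(identifier: nil, previewProvider: nil) { _ in
            let favorite = UIAction(title: "Add to Favorites",
                                    image: UIImage(systemName: "star")) { _ in
                // mark the location at indexPath as a favorite
            }
            let share = UIAction(title: "Share",
                                 image: UIImage(systemName: "square.and.arrow.up")) { _ in
                // present the share sheet for the location at indexPath
            }
            return UIMenu(title: "", children: [favorite, share])
        }
    }
}
```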
Okay. Let's move on to step three.
I'd like the sidebar on my Mac app to be vibrant.
Now this change doesn't happen automatically because it's something you should review to make sure it's appropriate.
When you do find it's what you'd like, it's a simple one-line change to set the background style to sidebar.
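A sketch of that change, assuming the sidebar is the primary column of a UISplitViewController:

```swift
import UIKit

func adoptSidebarAppearance(for splitViewController: UISplitViewController) {
    // One line opts the primary column into the translucent Mac-style sidebar material.
    splitViewController.primaryBackgroundStyle = .sidebar
}
```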
Okay, for our final change, I'd like to add a menu bar to our application.
So here in the storyboard I'll bring up the library and I'll search for a menu.
I'll grab a main menu and I'll drag it out into my storyboard and we'll open up the file menu.
I'd like to add a menu command in here for the login action. So we'll call this Login. We'll give it a key equivalent of Command-L. And I just now need to connect the menu item up to the action that I'm already using for Login.
Okay? That's it. Let's build and run our changes.
I'm going to go up and hide Xcode for the moment so we can see our application.
Okay, so now we have the vibrant sidebar. When I select an item, I can bring up a context menu and up here in the File menu I now have the Login action.
So just like that, three easy steps. Three easy steps to bring our app to the Mac and make a great user experience for all our users. Back to you, Sebastien.
Thank you, Matthew. That was really amazing. Doesn't this make you want to go and try it out? Yes. All right, in fact, over the last few weeks we invited a number of developers to take this for a spin. And the progress that they have made in a few short weeks is truly impressive. Here's a sample of the iPad apps that they already have running on the Mac.
Now once you've built a Mac app, the best way to distribute it to your users is through the Mac App Store.
It features the biggest catalog of Mac Apps. It's available in 155 countries throughout the world and the Mac App Store allows you to reach every single Mac user. Now we also built Gatekeeper to give users flexibility and choice on how they get their apps while helping protect them from malicious software. And in macOS Catalina, Gatekeeper will validate the apps that you run from the internet both at first launch and periodically thereafter to confirm that they're free of known malware.
This is accomplished by requiring developers to use the notarization service that we announced last year for both new and updated apps. So now you and your users can safely get apps from both the Mac App Store and the internet.
Notarization has already seen broad adoption. It's simple and fast with over 98% of submissions completing within 15 minutes.
Now speaking of security, we're continuing to invest in the foundations of macOS and I'd like to focus on three areas.
First, a new technology called DriverKit, which allows you to move your kernel extensions out of the kernel and into user space. And by running these drivers and extensions as user processes, we improve the stability of macOS for all of our users.
We identified the most common use cases that have required kernel extensions in the past, and now we have a user space alternative for over 75% of them in macOS Catalina.
We encourage you to adopt DriverKit, as future versions of macOS will no longer run these types of kernel extensions. Next, we're improving the stability of macOS by making the system volume read-only.
Here's how it works. Today there's a single volume that includes user data, apps and the operating system. And to further isolate macOS from changes, that storage will now be divided into two logical volumes.
One for the operating system files which will be read-only, and the other for user data and apps. There you go.
This will further protect the operating system from changes, increase stability and set us up to deliver future security benefits. Now some of you may have made assumptions in your app or your installer, and you'll want to check that it works seamlessly on macOS Catalina.
Finally, enhancements to app and data protection. We have spent the last few years adding additional data protection categories so that users are in control of which apps can access important files like your photos or sensitive sensors like your camera and microphone on your Mac. In macOS Catalina, we're continuing this work by ensuring that apps seek permission before capturing input events, so things like key presses or screen recordings.
And we're also going to protect user data on your Mac, so apps will now have to seek permission before accessing the files that users keep in their Desktop, Downloads and Documents folders, in iCloud Drive, and on external drives.
We are really excited about all the enhancements that we're bringing in macOS Catalina. Now another platform that's got some really big changes this year is watchOS. And to tell you more, I'd like to invite Lori up on stage. Lori? Thanks, Sebastien.
This morning we introduced a bunch of cool new features in watchOS 6, including new health apps like Noise and Cycle Tracking, Activity Trends, audiobooks and more. But the real story for watchOS 6 is that it's now possible to declare independence from the phone and build fully watch-focused experiences. Thanks to cellular connectivity, customers are increasingly leaving their phones behind and enjoying the freedom of using just their Apple Watch to stay connected. From running errands to running workouts, from listening to music to chatting with friends, we want all users to enjoy great Apple Watch experiences without limitations. And independent watch apps make that possible.
We've taken a good look at the challenges of developing for Apple Watch and worked hard not only to bring you new APIs that make it possible to support independent experiences, but also to completely revamp the experience of being an Apple Watch developer.
What if I told you it was possible to create a watch app that's only a watch app? If you've got an idea for a great watch-only experience, Xcode now makes it simple to create a watch app that's just a watch app. So you can pursue your idea without also having to build an iOS app.
And if you already have an iOS app, you can still build your app to be completely independent of its companion, thanks to a couple of key changes we made in watchOS 6 to support watch-only apps, including making Apple Watch a standalone push target. You now have the option of sending notifications directly to the watch, so you can update both your users and your app's data without relying on the phone to mediate.
We're also supporting CloudKit subscriptions and complication pushes to help you keep your app up-to-date.
And since asking users to sign in on iPhone is not an option when you don't have an iPhone app, in watchOS 6 we're giving you text fields so you can offer account creation and sign-in options directly on Apple Watch.
If you want to make account creation really easy, you can even add a Sign in with Apple button to your app to let your users set up an account and sign in with the Apple ID they already have. No new passwords or text entry required.
With watchOS 6 we're also addressing a common watch-only use case by bringing streaming audio to watchOS.
We introduced background audio playback in watchOS 5 for local files.
And now in watchOS 6, we've brought three ways to stream audio directly to Apple Watch by making the Network framework, NSURLSessionStreamTask, and even more of AVFoundation available to you.
We also recognize that there are use cases beyond audio playback, workouts and navigation where you need to keep your app running in order to complete a task. For example, a meditation session.
In watchOS 6, we're introducing a new extended runtime API that gives more apps a way to stay running even after the user lowers their wrist.
This enables new app experiences in self-care, mindfulness, physical therapy, smart alarms and health monitoring.
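A rough sketch of using the API for something like a meditation session (the class name and surrounding logic are assumptions; the session type itself is configured in the target's capabilities):

```swift
import WatchKit

final class MeditationSessionController: NSObject, WKExtendedRuntimeSessionDelegate {
    private var session: WKExtendedRuntimeSession?

    func begin() {
        let session = WKExtendedRuntimeSession()
        session.delegate = self
        session.start()              // keeps the app running after the wrist drops
        self.session = session
    }

    func extendedRuntimeSessionDidStart(_ extendedRuntimeSession: WKExtendedRuntimeSession) {
        // start timers, haptics, or monitoring for the session
    }

    func extendedRuntimeSessionWillExpire(_ extendedRuntimeSession: WKExtendedRuntimeSession) {
        // wrap up before the allotted time runs out
    }

    func extendedRuntimeSession(_ extendedRuntimeSession: WKExtendedRuntimeSession,
                                didInvalidateWith reason: WKExtendedRuntimeSessionInvalidationReason,
                                error: Error?) {
        session = nil
    }
}
```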
That's a lot of new APIs and capabilities. If only you had more options for creating a compelling user interface, right? We know you've been asking for a more advanced UI framework on the watch for years. And in watchOS 6 we finally have one with SwiftUI.
You've already seen SwiftUI on iOS. That same declarative language for defining beautiful user interfaces is available for watchOS as well, expanding what's possible on the platform.
From lists with swipe to delete, reordering and carousel styling, to direct access to the Digital Crown, it's easier than ever to create a compelling watch experience.
Let me show you how to start making use of some of the new independent app features with SwiftUI.
Okay, so I've got my travel app running here in the Simulator and I've already started updating it using SwiftUI, so it's starting to look great. But I still have some work to do beyond layout because my Sign In button currently just asks users to sign in on iPhone. And my users have told me that is not what they want. They want to be able to do everything right on their wrist.
So I'm going to quit the simulator and go over to my project file. And I'll move to my travel watch extension target and declare independence from phone by checking the Supports Running Without iOS App Installation box.
Next I'm going to go to the Sign In view that I've already started. I'll resume my previews.
Great. And you can see I have a Sign In button here and two previews. The top one is for English which is the language that I speak and the bottom one I'm starting to experiment with localizing my app into Arabic which is a right-to-left language.
So the first thing I'm going to do is add a text field for my username. And I'm going to bind it to my username state so that the field updates as the value changes.
Notice I've set the placeholder text to username so I give the user a chance to figure out what to do with this field. And I've also set the content type to username so that password and username autofill works when using continuity keyboard.
Next I'm going to add a password field, and for this I want to use a secure field so that people can't spy on me when I'm typing my password. And again, I'm going to bind this to my password state.
I've got a password placeholder text and again I'm using the content type of Password for autofill purposes. So that looks great in both English and Arabic. And for Arabic it's pulling the strings right out of my localizable strings file. This is not placeholder content.
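A simplified sketch of the form at this point in the demo (the property names are assumptions, and the Sign in with Apple button comes next):

```swift
import SwiftUI

struct SignInView: View {
    @State private var username = ""
    @State private var password = ""

    var body: some View {
        VStack {
            TextField("Username", text: $username)      // placeholder text
                .textContentType(.username)             // enables username AutoFill
            SecureField("Password", text: $password)    // obscures what's typed
                .textContentType(.password)
            Button("Sign In") {
                // validate the credentials and sign the user in
            }
        }
    }
}
```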
Okay. Above this, what I want to do is add a Sign In With Apple button because I think that's how users are really going to want to sign in.
So now I put that right at the top and then add a separator so it's clear that users have an option of signing in with their Apple ID or creating a custom username and password for my app. That looks great.
So the last step is to go over to my hosting controller and change my destination for my presentation button to the Sign In view that I just created instead of the Sign In on iPhone view. So I got that going and now I want to turn on Live Preview, so all my buttons become interactive. And then when I click on my Sign In button, I get my form. Sign In With Apple, or sign in with a username and password. That looks great. And that is creating a sign-in form on the Apple Watch with SwiftUI. Okay. So you've got the tools to build a great independent Apple Watch experience. How are you going to get your app in front of customers with the least friction possible? The App Store and Apple Watch will be highlighting great independent apps through curated collections and editorial selections at the top level of the App Store. We're emphasizing independent apps here so users can get the instant gratification of being able to download and start using your awesome apps right away, whether they have the phone with them or not.
And when you dive into individual product pages, you'll see that this isn't just a pared down experience. Users will see full featured app descriptions, screenshots, reviews and more.
They can search for apps with dictation or scribble.
And they'll be able to download your apps directly to their wrists, thanks to app and asset thinning, which make it possible to deliver a small bundle with only the architecture and assets that make sense for the current watch.
If you have both an iOS and a watchOS app, this will make your iOS app smaller too, as we're no longer downloading the watch bundle to the phone and then shuttling it over.
This is truly a whole new era for Apple Watch apps to be more functional, more beautiful and more independent than ever. We think both you and your customers are going to love this.
And now to talk about the platform that we just declared independence from, I'd like to welcome Cindy to the stage.
Thank you, Lori. iOS 13 is a big release.
You saw this morning that we have a ton of new features and enhancements like a redesigned share sheet, a QuickType keyboard and a brand-new CarPlay experience.
In addition to all of that, we took a good long look at our UI and gave iOS 13 a brand new look.
This new look includes Dark Mode, cards, contextual actions and symbols.
Let's dive into the incredibly cool new Dark Mode.
Dark Mode keeps the brightness down and gets the chrome out of the way so you can focus on just the content.
The entire system has been really thoughtfully updated and refined to look amazing. Your users are definitely going to want this. And to help you bring these same refinements to your apps, we've created some new APIs designed specifically with Dark Mode in mind.
First up: semantic colors.
There are new colors for backgrounds, fills and text.
And in Dark Mode, they have multiple variants to give your app a visual hierarchy. Now what does that mean? Well, when your app is full-screen, its background is pure black.
To ensure sufficient contrast, UI presented above it takes on a brighter color palette.
When multitasking on iPad, the slide-over app and side-by-side apps also render in these lighter layer colors.
There is a lot of nuance to this design, but you'll get it automatically with semantic colors.
And for when you need a pop, there's a bright palette of system colors that all have variants for the increased contrast accessibility mode as well as variants for Dark Mode.
There's also a brand-new set of materials and vibrant content filters with varying levels of transparency so you can create UI that looks great over any content. And just like semantic colors, these materials support both light and dark variants.
And they will automatically update based on changes to the UIKit trait collection.
Adopting semantic colors and adaptive materials will help you provide a unified look that automatically adapts to your environment.
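A small sketch of what adoption looks like in UIKit code (the view and label names are assumed outlets):

```swift
import UIKit

func applySemanticColors(to cell: UITableViewCell, title: UILabel, subtitle: UILabel) {
    cell.backgroundColor = .secondarySystemBackground   // adapts between light and dark, and between layers
    title.textColor = .label
    subtitle.textColor = .secondaryLabel
    cell.tintColor = .systemIndigo                      // system colors adapt to Dark Mode and Increase Contrast
}
```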
Another component of iOS 13's new look is cards.
Since the original SDK, the default presentation style on iPhone has covered the full screen.
We're changing that default to a much more fluid card presentation.
Cards provide a visual stack so you can see at a glance that you're in a presentation. And even better, they're dismissible with just a single downward swipe.
Yeah. Swiping. We've also updated the Peek and Pop experience.
It's now quicker and easier to access contextual actions throughout the system. And they're backed by a brand-new API designed to work across all devices. So not only are they better than ever on iPhone, but they look great on iPad as well.
And when you bring your iPad app to macOS, they'll look great there too.
Yeah. While we were going through the system, making all those thoughtful refinements, we started thinking about symbols.
Most apps use symbols. They are a really useful way to convey information. And symbols are very often used with text. But text has some great properties that in iOS 12 our symbols just didn't have.
So as you can see here, the text is scaling nicely as the dynamic type size increases, but the symbols stayed the same.
Ideally, we'd want the symbols to scale along with the text.
So we created SF Symbols.
SF Symbols have all the expressiveness and behavior of a font but packaged up as a UIImage, so they're really easy to use in your apps. iOS 13 includes an absolutely massive catalog of over 1,500 SF Symbols for you to use. And they're easily searchable right within Xcode and using the standalone SF Symbols app on your Mac.
So now you can see the symbols scale along with the text for better legibility and a more consistent layout at larger sizes. And because they behave just like a font, they're available in all of these weights as well.
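A quick sketch of both ways to use a symbol from the catalog (the symbol name is from the built-in set; the surrounding names are illustrative):

```swift
import UIKit
import SwiftUI

// UIKit: a symbol image configured to track the headline text style.
func makeWeatherSymbol() -> UIImage? {
    let configuration = UIImage.SymbolConfiguration(textStyle: .headline)
    return UIImage(systemName: "cloud.sun.fill", withConfiguration: configuration)
}

// SwiftUI: symbols sit alongside text and scale with the same font.
struct WeatherBadge: View {
    var body: some View {
        HStack {
            Image(systemName: "cloud.sun.fill")
            Text("Partly Cloudy")
        }
        .font(.headline)
    }
}
```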
All of this just scratches the surface of what's available in iOS 13. There's a new share sheet API to let apps have recipient suggestions. A new compositional layout API to make collection views easier to work with than ever. And a screenshot enhancement so apps can provide full page views of long content. And so much more.
And in addition to all of that, we really wanted to bring iOS forward this year.
So we gave it its own operating system complete with major enhancements to multitasking, a new PencilKit framework and a whole suite of productivity gestures.
Let's start with multitasking.
In iPadOS, your app can be open in multiple spaces at the same time, as well as in the Slide Over stack, and display different content in each space.
To enable this, we're introducing a new UIWindowScene API.
Each window scene represents a single instance of your app's UI.
Prior to iPadOS, your app delegate was responsible for both its process and UI lifecycle.
With window scene, we're splitting out the UI portion of that into a new scene delegate object so it can be managed independently.
And since they're completely independent, your app can now manage multiple at the same time. Your users can even use drag and drop to allow individual items from your apps such as a single window or message to be opened in a brand-new window scene.
With this new capability, it's really important that your users can resume whatever they were doing in any scene at any time.
To make this easy, we've built a new state restoration system based on NSUserActivity.
You're probably already familiar with this versatile API. It's used for handoff, search, indexing, Siri, and now for window scene state restoration.
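A rough sketch of a scene delegate that wires up a window and vends a restoration activity (the activity type, keys and root view are assumptions):

```swift
import UIKit
import SwiftUI

struct RootView: View {
    var body: some View { Text("Travel") }
}

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    // Each connected scene gets its own window and UI.
    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIHostingController(rootView: RootView())
        window.makeKeyAndVisible()
        self.window = window
    }

    // State restoration: describe what this scene is showing with an NSUserActivity.
    func stateRestorationActivity(for scene: UIScene) -> NSUserActivity? {
        let activity = NSUserActivity(activityType: "com.example.travel.viewing")
        activity.userInfo = ["locationID": "yosemite"]   // illustrative keys
        return activity
    }
}
```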
One of the things that really sets iPad apart is Apple Pencil.
We're introducing PencilKit which allows you to easily add smooth low-latency drawing to your apps.
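As a rough sketch (assuming a plain view controller hosting the canvas), adding a drawing canvas and the system tool palette looks something like this:

```swift
import UIKit
import PencilKit

class SketchViewController: UIViewController {
    let canvasView = PKCanvasView()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvasView.frame = view.bounds
        canvasView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(canvasView)   // low-latency drawing surface
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Attach the system tool picker (the palette) once the view is in a window.
        if let window = view.window, let toolPicker = PKToolPicker.shared(for: window) {
            toolPicker.setVisible(true, forFirstResponder: canvasView)
            toolPicker.addObserver(canvasView)
            canvasView.becomeFirstResponder()
        }
    }
}
```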
This is the same engine used in Apple apps like Notes, Markup and Screenshots. So you get all of those same features and tools right in your apps. You can even use the canvas and palette functionality separately and just pick and choose which pieces make sense for your use case. Finally, let's talk about productivity gestures.
We've made text selection much easier. You can now just drag your finger along text to select it.
Text views and web views are automatically updated with this new selection gesture.
And there are new three-finger gestures for undo and redo.
Swipe three fingers left for undo and right for redo. These new gestures use the existing NSUndoManager so you don't have to do anything at all to adopt.
If you'd like easy text selection outside of text or web views, or if your app already uses three-finger gestures and you have a conflict, you can use the UITextInteraction API to fix up those issues.
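A small sketch of that (assuming you already have a custom view that adopts UITextInput):

```swift
import UIKit

func addSelectionGestures(to customTextView: UIView & UITextInput) {
    let interaction = UITextInteraction(for: .editable)
    interaction.textInput = customTextView        // the view that provides the text
    customTextView.addInteraction(interaction)    // opts it into the system selection gestures
}
```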
And for scroll views, you can now drag the scroll indicator to jump directly to a location in the scroll view.
To enable this behavior, just turn on Show Scroll Indicators. For this one it's really important that your scrolling is performant as we might have to load all of the cells in a frame at the same time. We think our users are going to love the powerful new things iPadOS gives them, and we cannot wait to see what you do with it.
So I'd like to welcome Sebastien back to the stage.
Thank you, Cindy. Now as you've seen, each of our platforms has incredible new features that refine the experience that each offers and gives them great new capabilities. And across all of our platforms we build a range of technologies that are designed to give your apps a huge head start so that you can build the latest technologies right into your app.
There are a few of these that we'd like to focus on this afternoon and they cover a pretty wide range of capabilities from how we open our platforms and apps to all users to how we combine the virtual and real-world with augmented reality. And so we start with accessibility, and to do that I'd like to welcome Eric Seymour on stage. Eric? Thank you, Sebastien.
So we all know that technology plays a powerful role in people's lives. But this is especially true for people with disabilities. Technology can be instrumental in fostering independence, employment and empowerment. At Apple, we're guided by a few key principles for accessibility and it begins with accessibility being built in. People should be able to use our products out of the box, and that includes people of all abilities.
Accessibility should be comprehensive. People should have access to the whole platform, every corner of the OS, every corner of your apps. And perhaps most important, we want to surprise and delight all users regardless of ability. And so this is more than just about fixing accessibility bugs. This is about using your features with accessibility and striving for an experience that's great, that's just as inspired as your original design. When we think about accessibility, we're really talking about a broad continuum of abilities. Hearing, vision, physical, learning. And within each of these areas, we're focused on different conditions.
So for example, for vision, we of course have VoiceOver, our screen reader for people who can't see the screen. But we also have over a dozen vision-related features, from Zoom to Large Text. And when we take this approach and we apply it to that broad continuum of abilities, we're talking about dozens of accessibility features. And it really underscores the notion that accessibility is for everyone.
Probably most of you use at least one accessibility feature. And if you don't already, there's a good chance you will eventually. This year we're introducing several new accessibility features and enhancements, and today I'm going to talk about two, starting with discoverability. In the spirit of accessibility being for everyone, we wanted to make it easier to find. And so to that end we've added accessibility to iOS Quick Start, making the out-of-box experience even more accessible. We've also moved accessibility to the top level of Settings, and we've reorganized it to make things easier to find. We think it's going to go a long way to help people discover and use these great features.
Now let's talk about Voice Control, which we saw this morning during the keynote. Voice Control is a full voice experience for macOS, iOS and iPadOS, and we think it's going to be really helpful for people with physical challenges. Voice Control provides comprehensive platform access. You can speak to items by name. You can refer to items by number. You can even speak to regions of the screen using a grid. Voice Control has great text editing. So of course I can dictate text, but I can also make selections and corrections using only my voice. And it also has awareness. So effectively, even when I'm dictating text, it hears commands and it doesn't make me manage that distinction. I can just talk to it. And using the TrueDepth camera, if I look away, it knows that it can ignore me.
Voice control's got great spoken gestures so of course I can do simple things like taps and swipes. But I can also pre-record more complex gestures that I might want to use in an app or a game, like this rotate gesture.
And of course voice control speech recognition runs fully on device. And so now I'd like to show you voice control in action.
And for this demo I'm going to be talking to my iPhone.
Open messages. Hey Chris, let's grab dinner tonight.
I'm thinking pizza. Pizza emoji.
Change tonight to this weekend.
Tap send.
Undo that.
Tap send.
Undo that.
Tap send.
Open Maps.
Tap search field. San Pedro Square.
Show numbers.
Five. Show grid continuously.
15. Zoom at one. Repeat four times.
Swipe up at 27. Hide grid.
Tap share. Tap Chris Adams. Lots of options around here, period. See you later. Peace emoji.
Ah, look at that. Undo that.
Peace emoji. Tap send.
Undo that.
Tap send.
Go home. Go to sleep.
Okay. So that's voice control.
Now we can also use voice control as developers to test the accessibility of our apps. And so let's do that now with the travel app that you saw earlier. Wake up. Open Travel.
Tap San Francisco. Tap San Francisco. Show names.
All right, here's the problem. So I'm trying to tap on San Francisco, this element, but it doesn't have a good accessibility label yet and it's a really common problem. It means I can't speak to this element with voice control, and even worse, if I couldn't see the screen and VoiceOver were reading this to me, I'd be completely out of luck, stopped in my tracks. I would not be able to use this app. So fortunately these things are pretty easy to fix. And so let's talk about what you can do to make your apps more accessible.
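In practice, fixing that kind of missing label is usually a one-line change. Here's a minimal sketch, assuming a hypothetical SwiftUI destination card (the image name is made up), with the UIKit equivalent in the trailing comment:

```swift
import SwiftUI

// Give the element a spoken name so Voice Control and VoiceOver have
// something to refer to. "san_francisco_hero" is a made-up asset name.
struct DestinationCard: View {
    var body: some View {
        Image("san_francisco_hero")
            .accessibility(label: Text("San Francisco"))
    }
}

// UIKit equivalent for an existing image view:
// imageView.isAccessibilityElement = true
// imageView.accessibilityLabel = "San Francisco"
```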
The good news is, most accessibility features just work. But some of them, indeed the most transformative features like Voice Control and Switch Control and VoiceOver, they need your support. And so here's what you can do. First, do what we just did. Just try it. Use your apps with accessibility features. You might actually be surprised at what already works. But more importantly, you're going to gain valuable insight into how some users actually experience your app. And you're probably going to want to make some changes.
Next, use the tools. Xcode's got great built-in accessibility support for developers. You can edit accessibility properties right in the Xcode inspector. And with new Environment Overrides, you can preview visual accessibility accommodations during your development lifecycle right in your app. It's really cool.
Finally, implement the Accessibility API. It's the best way to ensure an accessible experience. It's the essential way. Doing this well is like putting out a welcome mat to your app for users of all abilities. It's how VoiceOver and Switch Control and the rest talk to your app to offer an adapted experience. The Accessibility APIs work on all platforms, and while they're easy to implement, they're super powerful. So even the most sophisticated apps and experiences can be made fully accessible. And of course, SwiftUI has great accessibility support built right in. And so that's our accessibility update today. Now another thing we care deeply about at Apple is privacy. And so to tell you more about that, I'd like to hand things over to Katie. Thanks very much. Thanks, Eric. Privacy is a topic that isn't going away. And it's something that everyone needs to pay attention to. It's something you have to design in from the beginning and it shapes how your product works.
When you're designing a new feature, here are a few steps that you can take to design for privacy.
Process on the user's device. Wherever you can keep user data on-device, do it. And this helps you to collect as little data as you can. If you don't have the data, it can't be abused or stolen. Ask first. Ask permission from your user for the data and how you plan to use it. And if you do collect data, use random identifiers. And scope them down from an account to a device, to a session where possible. And encrypt to keep your users' data secure.
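As a small illustration of the identifier point, a session-scoped identifier can be as simple as a freshly generated UUID that is never persisted (a sketch, not a prescription):

```swift
import Foundation

// A session-scoped random identifier: regenerated on every launch and never
// written to disk, so it can't be joined with an account or device over time.
struct AnalyticsSession {
    let sessionID = UUID().uuidString
}
```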
Applying these principles in your design process will help you build great features and great privacy. I want to talk about two areas where we've made it easier for you to take these steps. First, location.
Where you go can reveal a lot about your life.
Where you live, where you work, what doctor's office you might go to, or how often you're hitting the gym versus maybe the bar.
Because of this, some users are hesitant to share location with you and your apps.
So they might miss out on some of your key features.
So this year, we're adding a new option: Allow Once.
This provides location access for just that session and will ask the user again next time. But let's say your app is even better with Always Allow Location permission.
Here's how this will now work.
First the user needs to select While In Use.
Then you request location while your app is in the background.
Then the user will be presented with an alert, letting them know that you're requesting location in the background. If they change to Always Allow, you'll have background location access moving forward. Finally, we're giving users more transparency into how their location is being accessed.
For all apps with background location permission, from time to time we'll show them where your app accessed their location.
With these changes to permissions, users will feel more comfortable in how they're sharing location with you.
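In code, that request flow might look roughly like this with CLLocationManager (a minimal sketch; the background feature itself and error handling are omitted):

```swift
import CoreLocation

final class LocationPermissionManager: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Step 1: ask for When In Use up front.
    func requestWhenInUse() {
        manager.requestWhenInUseAuthorization()
    }

    // Step 2: later, when a feature genuinely needs background access, request
    // Always. On iOS 13 the user sees the upgrade alert at a moment the system
    // chooses, not necessarily right away.
    func requestAlwaysIfNeeded() {
        manager.requestAlwaysAuthorization()
    }

    func locationManager(_ manager: CLLocationManager,
                         didChangeAuthorization status: CLAuthorizationStatus) {
        // React to whatever level the user granted; the new "Allow Once" option
        // surfaces as a temporary When In Use authorization.
        print("Authorization changed:", status.rawValue)
    }
}
```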
Now, let's talk about login.
We've all seen or maybe implemented buttons like these. And they can be really convenient, but they can come at the cost of your user's privacy.
They also might share more information about your company's business than you really want to be disclosing.
So we want to offer a better option. And it's called Sign In With Apple. It offers fast, easy sign-in without all the tracking.
This isn't just about privacy for our users, but also for your company.
It's not our business to know how users engage with your app, so Apple simply won't track that. It's easy to add a Sign In With Apple button to your app with a simple API.
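Here's a rough sketch of that API using AuthenticationServices (layout, error handling and server-side validation of the returned credential are omitted):

```swift
import AuthenticationServices
import UIKit

final class SignInViewController: UIViewController, ASAuthorizationControllerDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        // The system-provided button keeps the approved styling.
        let button = ASAuthorizationAppleIDButton()
        button.addTarget(self, action: #selector(handleSignIn), for: .touchUpInside)
        view.addSubview(button)
    }

    @objc private func handleSignIn() {
        let request = ASAuthorizationAppleIDProvider().createRequest()
        request.requestedScopes = [.fullName, .email]

        let controller = ASAuthorizationController(authorizationRequests: [request])
        controller.delegate = self
        controller.performRequests()
    }

    func authorizationController(controller: ASAuthorizationController,
                                 didCompleteWithAuthorization authorization: ASAuthorization) {
        if let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
            // `user` is a stable, team-scoped identifier; the email may be a
            // private relay address if the user chose to hide theirs.
            print("Signed in:", credential.user, credential.email ?? "relay email")
        }
    }
}
```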
Users can set up an account and sign into your app with a tap and a quick Face ID. So why is this great for all of you? First off, more trust and less friction equals more engaged users.
Sign In With Apple can shorten the distance between a user considering your application and really embracing it.
Second, verify email addresses.
Apple has already done the work of verifying email addresses for you.
And we're removing the incentive for users to share made-up email addresses by offering a private email relay service. So even if a user chooses to hide their email address when setting up an account, your email will still arrive in their verified inbox. And then there's security. With Sign In With Apple, you don't need to deal with storing passwords or password reset issues. And every single account is protected with two-factor authentication.
This can really improve your security.
We've also integrated some interesting innovations around anti-fraud. We all know that along with some real users, sometimes you get some not so real users.
Nobody wants bots or farmed accounts. And we work hard to filter them out of our systems. And we want to help you do the same. So we built what we call a real user indicator. It can tell you if an incoming account is a real user or if you might want to do some additional verification.
So how does this work? First off, the whole system is built from the ground up to maintain user privacy. It uses on-device intelligence to determine if the originating device is behaving in a normal way. The device generates a value without sharing any specifics with Apple.
This is combined with select account information and then boiled down into a single value that's shared with your app at account setup time. Then depending on the value that you receive, you can be confident that your new user is a real user or get a signal that you might want to take a second look.
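Reading that indicator is a one-liner on the credential you already get back at sign-in; roughly:

```swift
import AuthenticationServices

// Inspect the real user indicator on the credential from the sign-in callback.
func evaluate(_ credential: ASAuthorizationAppleIDCredential) {
    switch credential.realUserStatus {
    case .likelyReal:
        break   // high confidence this is a real person; proceed normally
    case .unknown, .unsupported:
        break   // consider additional verification before granting full access
    @unknown default:
        break
    }
}
```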
And all of this comes with great cross-platform support. It's available on iOS, iPadOS, macOS, watchOS, tvOS and it even works on the web.
So it can work on Android and Windows devices.
So there you go.
A super-fast and easy way to engage new users, two-factor authentication and anti-fraud built in.
You can implement it virtually anywhere, and most importantly, it respects everyone's privacy.
So this is a solution both you and your users can trust.
We've already had a number of developers working with us and we're excited to see many more of you adopt. So that's Sign In With Apple.
As I mentioned earlier, a great way to preserve user privacy is to work with the users' data on-device. And we've built some great technologies for doing just that. To tell you more about machine learning, I'd like to hand it over to Bill.
Thank you, Katie.
Machine learning is a key technology for so many of the experiences in your apps. And at Apple, we use on-device machine learning to power features from stunning camera and photos capabilities to ARKit and more. And we can do this because of our cutting-edge silicon.
With powerful CPUs, GPUs and dedicated ML processors like the Neural Engine, we can deliver incredible real-time experiences.
The Neural Engine is optimized to accelerate convolutional neural networks with multi-precision support and a Smart Compute system.
What does that mean? It means it's an absolute beast. In fact, the Neural Engine is capable of up to 5 trillion operations per second. Best of all, we've built our machine learning APIs on top of this so that your apps can take full advantage of this blazing performance.
And we have some great updates, starting with our out-of-the-box APIs like Vision, Natural Language, and Speech.
Today these APIs deliver rich features such as face detection, object tracking and named entity recognition. And this year, we're adding even more. Let's have a look at a few of these, starting with image saliency which gives you a heat map for an image, highlighting important objects and where users are likely to focus their attention.
We use this today in Photos to help intelligently crop images as part of the curation experience. We're also releasing text recognition where you can search text from images like posters, signs and documents. It can also take advantage of the document camera capability we use in Notes.
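As a rough sketch of the text recognition side, a single Vision request is enough to pull candidate strings out of a CGImage:

```swift
import Vision

// Run the new text recognition request on an image and print the best candidates.
func recognizeText(in image: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // Each observation can return several candidates; take the most confident.
            if let best = observation.topCandidates(1).first {
                print(best.string, best.confidence)
            }
        }
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```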
For Natural Language, you can make use of word embeddings which help to identify words or sentences with similar meanings.
We use this today for search in Photos so that if you search for an unknown term like musician, we can suggest alternatives like entertainer or singer.
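A minimal sketch of the word embedding API (the query term is just an example):

```swift
import NaturalLanguage

// Find terms with meanings close to "musician" using the built-in English embedding.
if let embedding = NLEmbedding.wordEmbedding(for: .english) {
    let neighbors = embedding.neighbors(for: "musician", maximumCount: 3, distanceType: .cosine)
    for (word, distance) in neighbors {
        print(word, distance)
    }
}
```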
And this year, our Speech API is now on-device and works on iPhone, iPad and Mac with support for 10 languages.
And with features like Speech Saliency, you can understand the pronunciation, pitch and the cadence of speech.
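And a hedged sketch of on-device recognition, assuming the user has already granted speech recognition permission and you have a local audio file to transcribe:

```swift
import Speech

// Request recognition that stays on the device, so audio never leaves it.
func transcribe(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true   // keep audio off the network

    recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```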
Now for those of you who want to go deeper with machine learning, you can make use of Core ML, our on-device technology designed to run machine learning models with high performance and privacy.
Today Core ML has great support for many machine learning models, from neural networks to boosted trees and more.
But as you know, the field of machine learning is constantly evolving. And so this year we set out to support the most advanced neural networks by adding more layer types than ever before.
In fact, Core ML now supports over 100 model layer types.
This enables you to run some of the most cutting-edge machine learning models on Apple devices.
Models like ELMo or WaveNet or some very recently published ones like BERT, bringing breakthrough natural language processing to your apps.
Now running models like these in your apps is only part of the story.
There are times when you may want to update the models in your apps on-device based on user data.
We do this today for features like Face ID where a user's appearance may be evolving over time. They change their hair, wear a hat.
Or for features like our Siri Watch Face where the set of recommendations is constantly evolving to deliver a personalized experience for each user. To achieve these experiences, we use on-device personalization. And this year we're bringing that capability to Core ML.
This means you can update the Core ML models in your app with data from individual users.
This creates an updated and personalized model for the user.
With model personalization, your apps can now update models in the background without compromising user privacy.
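A hedged sketch of what that update step might look like with MLUpdateTask; the model file name and training batch are hypothetical stand-ins for your own updatable model and user data:

```swift
import CoreML

// Update an on-device, updatable Core ML model with a batch of user data and
// save the personalized result next to the original.
func personalizeModel(at modelURL: URL, with trainingData: MLBatchProvider) throws {
    let updateTask = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // Write the personalized model back so the next prediction uses it.
            let updatedURL = modelURL.deletingLastPathComponent()
                .appendingPathComponent("Personalized.mlmodelc")
            try? context.model.write(to: updatedURL)
        }
    )
    updateTask.resume()
}
```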
Core ML delivers the most advanced platform for machine learning models, and building Core ML models has never been easier with Create ML, our framework designed to help all of you build models with just a few lines of code.
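"A few lines of code" really is the scale of it. Here's a sketch of training an image classifier with the Create ML framework in a macOS playground; the dataset paths are made up, and the training folder is assumed to contain one subfolder per label:

```swift
import CreateML
import Foundation

// Train an image classifier from labeled folders and export a Core ML model.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/Users/me/Datasets/Landmarks/Train"))

let classifier = try MLImageClassifier(trainingData: trainingData)
try classifier.write(to: URL(fileURLWithPath: "/Users/me/Landmarks.mlmodel"))
```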
And this year we're taking Create ML even further. It's now a macOS app that lets you build models with zero code right from your Mac.
You can choose from many different model templates to fit your data. You can build multiple models with different datasets and define the parameters for each of them. You get real-time feedback on model training.
And Create ML supports transfer learning for tasks like image classification or text analysis. This speeds up training since you need very little data and can leverage Apple's optimized and heavily pre-trained models.
And you get to experiment and preview the models. So for example, you can get predictions for images by using your iPhone's camera with continuity on your Mac. Or you can use the microphone on your Mac to test your sound classification model.
So that's a ton of new stuff and we're super excited to see what you can do with all these awesome new machine learning capabilities.
In fact, we invited a few developers to try out all the new stuff and we've seen some amazing results.
One in particular was so cool we decided we had to share it with you. So please welcome Ben Harroway from Lumen Digital to give you a preview of his new app NoisyBook. Thanks, Bill. Hi, everyone, I'm Ben from Lumen Digital and I've been working on a brand-new app, NoisyBook.
Let me tell you a story. Once upon a time, on a beautiful meadow lived a boy called Jack and his cow Daisy. Daisy.
A mysterious man gave them some magic beans which grew into a giant beanstalk, high into the clouds.
Okay, I think everybody knows this story. Let's try something really different.
Suddenly an exploding chicken and his friend the golden tiger jumped into their helicopter and flew into the forest. And of course, guess what? They all lived happily ever after. Yay.
Can you make animal noises you heard in the story? Okay, we've had some fun. Now NoisyBook wants us to repeat some of the animal noises that we heard during the story. I think we heard a cow in this story, so let's try this.
Moo. There he is. I really cannot believe I'm standing here making animal noises in front of all of these people. Mad. But how amazing, the app has used a sound classification model to actually recognize that noise and acknowledge it.
You likely also noticed NoisyBook was able to work with both traditional stories and stories straight from our own imaginations. It's super powerful. And thanks to the new features of speech, sound and Core ML in iOS 13 and Create ML, this is all happening entirely on-device.
It's all happening in real time and it's running through a natural language model that I've trained on over 90,000 lines of text.
And thanks to these features, I've been able to take an idea that I've struggled with for around two years and really implement some of these magical new features in just a couple of days.
I'm super proud of it and I do hope that you'll remember to check out NoisyBook when it lands on the App Store later this year. Thank you. Thanks, Ben. That was really cool. I know my kids are going to love it. Now one of the biggest uses of machine learning at Apple is Siri. Siri is by far the world's most popular intelligent assistant with over 500 million monthly active devices, making over 15 billion requests. These are staggering numbers. And Siri works across all of Apple's devices. With Siri, your users can interact with your apps in new ways. On the go, with AirPods, hands-free from across the room, or even while in the car. And thousands of apps are now integrated with Siri through Siri Shortcuts.
We built Siri Shortcuts to allow you to expose the capabilities you already have in your apps with very little work and in a discoverable way for your users.
You can make your shortcuts discoverable using the Add to Siri button, educating your users on how they can use your app with voice.
That matters because voice functionality can otherwise be really hard to discover.
And we've simplified setup so that the user no longer needs to record a phrase. You suggest a phrase and they add it with a tap.
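A rough sketch of offering that Add to Siri button for a user activity (the invocation phrase and activity are placeholders):

```swift
import Intents
import IntentsUI

// Build an Add to Siri button for an existing NSUserActivity, with a
// suggested phrase the user can accept with a tap.
func makeAddToSiriButton(for activity: NSUserActivity) -> INUIAddVoiceShortcutButton {
    activity.isEligibleForPrediction = true
    activity.suggestedInvocationPhrase = "Show my favorites"   // the phrase you suggest

    let button = INUIAddVoiceShortcutButton(style: .black)
    button.shortcut = INShortcut(userActivity: activity)
    return button
}
```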
And the biggest request we had this year was to support parameters in Shortcuts. So we've made Shortcuts conversational which allows users to interact with your app through questions in Siri.
So for example, if I'm choosing what to cook, I could run a Shortcut with Pana, my recipes app, and see a list of all my favorites. When I choose from the list, it takes me to the recipe and starts playing. And this year the Shortcuts app is built into iOS and iPadOS, which means that every user will have an opportunity to try it out. And the app is now the home for shortcuts from your apps too.
And by popular request, we're adding support for automation. Which allows users to set specific triggers for when to run any shortcut. And there's plenty of options to choose from. You can trigger a shortcut based on time of day, when you start a workout on your Apple Watch, when you connect to CarPlay and many more.
And the editor now enables full configuration of your app's actions, including the ability to pass information in or out of your action through parameters.
With this, your app's actions can be combined with actions from other apps in multi-step shortcuts.
Let's say you need to get dinner for the family. The kids are hungry, you need it fast. You could have a shortcut that uses the Caviar app that lets you choose a restaurant, choose a meal, place the order and then text the whole family with what's for dinner and when it will arrive.
That's combining the power of your apps with Siri Shortcuts to make everyday tasks really easy.
And of course Shortcuts work across iPhone, iPad, Apple Watch and HomePod too. And that's our update for Siri. Now I'd like to invite Jeff to tell you about the latest advances in augmented reality. Thank you.
Thanks, Bill. I am thrilled to be here today to talk about augmented reality.
AR helps you visualize things that are difficult, expensive or impossible to do otherwise. And since introducing ARKit, we've seen amazing growth in applications. One may think of AR as only for entertainment, but we've seen great applications in education, enterprise, commerce and more.
Commerce is a particularly impressive use case with Home Depot, Target and Wayfair all having tens of thousands of products available to preview in AR. ARKit, the USDZ file format and AR Quick Look together make the world's first mass-market augmented reality commerce solution. In fact, Wayfair is seeing more than a threefold increase in purchasing when folks view their products in augmented reality.
And we love that this is a great, real business use case for augmented reality in commerce.
We'd like to continue this momentum by announcing that Apple Pay will be integrated directly with AR Quick Look this fall. This makes it easier for consumers to try on and buy items like these glasses, directly from augmented reality. ARKit on iOS and iPadOS is the world's largest augmented reality platform with hundreds of millions of enabled devices. And we've heard from many developers, they'd love to take advantage of this great opportunity but may not be sure where to start. Or 3D can be a little bit intimidating if you've never used it before. Well, we've been listening and we're really excited to announce three technologies that make it much easier to develop augmented reality applications.
ARKit, RealityKit and Reality Composer together provide the frameworks and tools you need to quickly and easily develop augmented reality applications and experiences. Starting with Reality Composer, you can create compelling AR experiences even if you've never worked with 3D before. It provides an intuitive, what you see is what you get interface that integrates seamlessly with Xcode.
And to show you Reality Composer, I'd like to invite one of my colleagues, Shrudi, up to the stage. Thank you, Jeff. Happy to be here. I have this great travel app which shows some activities offered on the main island of Hawaii. If the user opts for helicopter tours, the app shows the path of the helicopter. How about we use AR to provide users a better sense of the actual tour? I can do that by adding a button to the existing app to launch my AR experience. Let's see how to do that. First, we create a button using SwiftUI.
Followed by adding that button to my existing view.
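A hedged sketch of what that SwiftUI side might look like, including the AR view it presents; `Experience` stands in for the Swift type Xcode generates from a Reality Composer file, and `HawaiiTour` is a hypothetical scene name inside it:

```swift
import SwiftUI
import RealityKit

// Wrap a RealityKit ARView so it can be presented from SwiftUI, loading the
// scene generated from a (hypothetical) Reality Composer project.
struct ARTourView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        if let scene = try? Experience.loadHawaiiTour() {
            arView.scene.anchors.append(scene)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ActivityDetailView: View {
    @State private var showingAR = false

    var body: some View {
        VStack {
            Button("View Tour in AR") { self.showingAR = true }
        }
        .sheet(isPresented: $showingAR) {
            ARTourView()
        }
    }
}
```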
Then I open an empty project file in Reality Composer and integrate it into my Xcode project by simply dragging and dropping it in Xcode. To load my AR scene from this Reality Composer project file, I import RealityKit and then create a new view for AR. Oops. Sorry. Create a new view for AR using SwiftUI. And that's all the code you need to add an AR experience to your existing app. Moving on to the fun part of creating my AR scene with Reality Composer. I open my empty Reality project and start by loading a custom USDZ of the Hawaii model.
Sweet. Next I'd like to mark the beginning of my helicopter tour. For that I can use Reality Composer's built-in content library which offers hundreds of professional-grade 3D content to developers. I'll use a simple sphere.
I can change the content's look, by applying a different material to it.
As you can see, placing content in 3D is pretty simple and intuitive with Reality Composer.
Let's see what else we can do here. How about adding a cool fading effect to the scene when the scene starts. I can do that by opening the Behaviors panel and creating a custom behavior which gets triggered when the scene starts. I first add an action to hide all the content in the scene, and then add another action to make all the content appear after a certain duration. How about we preview it right here? Awesome. Developing AR on the Mac is convenient, but it poses the challenge of guessing the content's scale and look when placed in the real world. That's why we created Reality Composer for macOS as well as iPadOS and iOS, to take the guesswork out of development. So I'll hand this off to Jeff to see what we have so far on an iPad.
Thanks very much, Shrudi. So this is Reality Composer for the iPad.
It has the same great features that you see in Reality Composer for the Mac. And we can take the scene that Shrudi handed off and finish it out with our final artwork. So we've had someone create our final artwork with Adobe Aero, and we'll put it into the scene. So I'm going to take the proxy art that Shrudi had. I'm going to replace that with our new artwork. Let me check to see if that's the right thing. Fantastic. That is our final helicopter. And I also want to bring in the animation that goes with that. That's super easy. If you remember, she created that behavior, so we're going to look at that behavior and all we're going to do is add an additional action.
So we look for USDZ animation which is bringing in the animation that went with the file.
Fantastic. Looks good. Let's preview that. Great. So hide our Behaviors tab. That looks like what we want.
Perfect. Let's preview this in AR, which you can do right on the iPad.
Wow. Let's try that again.
Fantastic. That's exactly what I wanted it to look like. And we can also play that.
Perfect. We have the animation of the helicopter touring the island. That will look great in our travel application.
So that's Reality Composer for the iPad.
And you're going to love how you can have the same great ease of use and seamless experience between macOS, iPadOS and iOS with Reality Composer. Now RealityKit. RealityKit is a modern high-performance 3D engine designed from the ground up for augmented reality rendering and simulation.
And because it's delivered as a framework, it's very easy for all of you to take your 2D apps and extend them into 3D.
RealityKit uses modern physically-based rendering and materials. It is a data-driven rendering system and a fully multi-threaded renderer that's highly optimized for Apple's GPUs.
And also, really importantly, we've integrated ARKit scene understanding into RealityKit. Which means as ARKit learns more about the environment, it synchronizes this to your virtual scene automatically. We saw RealityKit in action this morning. Let's take a closer look.
Let's see what's really going on.
The reality you see is based on things like image-based lighting, motion blur and camera effects like depth of field and camera noise that really blur the line between what is reality and what is virtual. And you get these features with RealityKit automatically.
You access RealityKit through a new native Swift API. It takes advantage of key features of Swift to allow you to write clear, compact code.
Concepts like loading and anchoring are directly integrated. For example, it's easy to load your AR assets and directly attach them to anchors. Protocol extensions provide easy access to entity properties, which allow you to quickly access components such as lights or shadows in this case and reduce the need for runtime checks.
This also means that you're able to work with entities in a strongly-typed manner. Here we're applying an angular force to an entity that participates in physics. And that's all the code you need for this scene.
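A minimal sketch of those ideas with the RealityKit API; the asset name is made up, and the torque call only applies if the loaded entity actually has a physics body:

```swift
import RealityKit

func setUpScene(in arView: ARView) {
    // Anchor content to the first horizontal plane ARKit finds.
    let anchor = AnchorEntity(plane: .horizontal)
    arView.scene.addAnchor(anchor)

    // Load an asset and attach it directly to the anchor.
    if let robot = try? Entity.load(named: "toy_robot") {
        anchor.addChild(robot)

        // Entities that participate in physics expose strongly-typed helpers,
        // such as applying an angular force.
        if let body = robot as? Entity & HasPhysicsBody {
            body.addTorque([0, 3, 0], relativeTo: nil)
        }
    }
}
```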
Last but definitely not least today is a new version of our augmented reality framework, ARKit 3.
We've taken the most capable AR platform in the world and made it even more powerful with a set of new in-depth features.
Since introducing ARKit, we've had many developers ask to be able to use the front and back cameras simultaneously. Well, in ARKit 3 you can. So you can -- that's right, both cameras at the same time. This allows you to use face tracking to drive your augmented reality experiences directly.
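Enabling it is a single flag on the world-tracking configuration (a sketch; only newer devices support it):

```swift
import ARKit

// Run world tracking on the back camera while also tracking the user's face
// with the front camera (ARKit 3, supported devices only).
func runSimultaneousTracking(on session: ARSession) {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.userFaceTrackingEnabled = true
    session.run(configuration)
}
```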
And as Craig talked about this morning, properly occluding people in an AR scene is an extremely tough problem. You see it every time someone walks in front of a virtual object.
To solve this, we've built an advanced machine-learning algorithm that figures out which pixels are a person, the depth of that person in the scene, and uses this information to properly render the scene with the virtual objects. With people occlusion, entirely new experiences like the Minecraft Earth demo you saw this morning are possible. Absolutely amazing. And finally, we built a system that allows humans to interact with virtual content. ARKit 3 is able to capture a person's motion in real time with just the single RGB camera in an iPad or iPhone.
We again use a machine-learned algorithm to track the person, building a 2D stick figure, and then lift that figure into 3D to infer its motion.
Both the 2D skeleton and the 3D skeleton are available to developers. The 3D skeleton has over 90 articulated joints and provides the same ease of use as face tracking.
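A rough sketch of reading that skeleton with body tracking (delegate wiring only; rendering the motion is up to you):

```swift
import ARKit

// Run a body-tracking session and read the 3D skeleton from each body anchor.
final class BodyTracker: NSObject, ARSessionDelegate {
    func start(on session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The 3D skeleton exposes a transform per articulated joint.
            let skeleton = bodyAnchor.skeleton
            print("Tracked joints:", skeleton.jointModelTransforms.count)
        }
    }
}
```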
So those are our new technologies. ARKit 3, RealityKit, and Reality Composer are tools and frameworks that make it easy for anyone, anyone, to build amazing AR experiences. And we'd like to do something fun today, so we have a fun new application at the conference. You may have seen it, SwiftStrike.
We're making a tabletop version of this as a developer sample available today. It uses RealityKit, ARKit 3 and Reality Composer and provides a great starting point for your applications.
Lots of fun. Thank you. And of course Metal powers a lot of what we do in AR on our devices. And to tell you more about what's new in Metal, I'd like to welcome Jeremy to the stage.
Thank you, Jeff.
So Metal is Apple's modern high-performance GPU programming API for graphics and compute. It's also incredibly easy to use, both for beginners and experts alike. And it brings stunning performance increases, supporting up to 100 times more draw calls than OpenGL and enabling a whole new generation of advanced graphics performance.
This is because Metal gives your app direct control over the GPUs that are at the core of Apple's products. And those GPUs now power over 1.4 billion Metal-capable systems from iPhones to iPads to the all-new Mac Pro. In fact, all of Apple's platforms now run on Metal. From our smooth user interface to the latest 3D rendering in RealityKit, to our advanced camera processing pipeline, we're using Metal everywhere. And you can too. To help you do just that, this year we focused on three key areas. We've made Metal even easier to use. We've enabled all-new levels of high-performance GPU compute. And we've enhanced Metal for our most demanding pro app developers and customers.
First, with Metal's incredibly approachable API and GPU shading language, you can get started with our powerful suite of developer tools for GPU debugging, profiling and performance optimizing. And we've made those tools even better. We have added full Metal support to the iOS Simulator in Xcode. We're glad you're excited about it. We're really excited about it too. You can now use Metal directly in the simulator and you automatically get major performance improvements when using UIKit, Maps and all of those system frameworks built on Metal. And this is because the iOS Simulator is now using the native Metal support built right into your Mac.
We've also added an all-new Metal memory debugger. You can now identify exactly how much memory your app is using for Metal textures, buffers and heaps, and you can optimize your games and apps to use every last byte for even more advanced graphics.
Now over the past few years, Metal has grown to support the advanced features of dozens of GPUs, each with their own hardware feature sets, from every major GPU vendor and across all of our platforms and OS releases. And as a developer, you previously had to manage all of the complexity of these different feature sets yourself.
Well this year we've made it much simpler with just three Metal GPU families. A Metal common GPU family, identifying the vast majority of Metal features that you can use across all of our platforms. A second family for the advanced unique features of our Apple Design GPUs and our iOS, iPadOS and tvOS products. And a third family for the powerful GPUs on our Mac systems. And it makes it that much easier to bring your apps from iOS to macOS or the other way around.
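Checking capabilities against the new families is a simple query on the device (a sketch; which family constants you test depends on the features you need):

```swift
import Metal

// Query the simplified GPU families on the current device.
if let device = MTLCreateSystemDefaultDevice() {
    if device.supportsFamily(.common3) {
        // Features shared across Apple platforms.
    }
    if device.supportsFamily(.apple5) {
        // Advanced features of recent Apple-designed GPUs.
    }
    if device.supportsFamily(.mac2) {
        // Features of the powerful GPUs in Mac systems.
    }
}
```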
Now, in addition to enabling immersive games and advanced graphics, Metal also gives your app the ability to harness the GPU for compute. So what is GPU compute? Well, GPUs were originally designed to process large numbers of pixels requiring the execution of complex mathematical computations in a massively parallel fashion. And it turns out we can apply that computational horsepower to a wide variety of tasks besides traditional graphics. So Metal provides all of the building blocks that you need for general purpose computation on the GPU. A familiar C++ based GPU programming language, compute command encoding, API and runtime, a full-feature compiler and debugger and a rich library of shaders and kernels called the Metal performance shaders.
This MPS library provides you valuable compute functions all pre-optimized for all of those GPUs and all of those Apple systems and it's all fully integrated right into your Metal code.
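Putting the basic building blocks together, a compute dispatch from Swift looks roughly like this (the `addArrays` kernel name and the thread counts are placeholders; buffer binding is elided):

```swift
import Metal

// Encode and dispatch a simple compute kernel from the app's default library.
func runComputeKernel() throws {
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary(),
          let kernel = library.makeFunction(name: "addArrays"),
          let queue = device.makeCommandQueue() else { return }

    let pipeline = try device.makeComputePipelineState(function: kernel)

    guard let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }

    encoder.setComputePipelineState(pipeline)
    // Bind inputs with setBuffer(_:offset:index:) here as your kernel requires.

    let threadsPerGroup = MTLSize(width: 64, height: 1, depth: 1)
    let groups = MTLSize(width: 1024 / 64, height: 1, depth: 1)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}
```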
And on our Apple Design GPUs, Metal also provides advanced compute features like tile shading, enabling you to combine your compute shaders and your fragment processing into one simple, highly-efficient render pass.
And this year we're also introducing Metal indirect compute command encoding. It allows you to build your GPU compute commands right on the GPU itself, unlocking all-new algorithms for compute efficiency and freeing the CPU for other activities in your app. And with the Radeon Pro Vega II, the new Mac Pro is a GPU compute monster, capable of up to 56 teraflops of GPU compute all made available to you via Metal. Now that's a heck of a lot of flops. I mean, look at them all. They barely fit on the screen. That's a lot. So what can you do with all of those flops of GPU compute? Well, with Metal you can use them for advanced compute processing for your videos, you can improve the quality of your photos, you can train your ML models and you can use them to accelerate interactive ray tracing.
So we have further improved Metal support for ray tracing this year, now enabling dynamic scenes by moving the bounding volume hierarchy construction from the CPU to the GPU, and added all-new optimized MPS de-noising filters to further improve image quality. Now ray tracing, it uses the GPU to computationally model the physical properties of lights and surfaces and reflections and it can be so complex, people actually earn PhD's in this topic. So to show you how you can use Metal and GPU Compute for ray tracing, we decided to put together a pretty simple example. And I'd now like to invite Rav to the stage to give you a quick demonstration. Rav? Thank you, Jeremy.
So we built a prototype hybrid ray tracing engine to see what we could do with Metal Compute on the powerful new Mac Pro.
Now this toy city that we built looks simple, but we're using Metal to process over 1 billion rays per second at 4K resolution. Let me walk you through what we're doing here.
So we start by using Metal draw commands to render the geometry and material information that we're going to use later, and then switch to using Metal Compute and the MPS ray triangle intersection APIs to do all the heavy lifting. This includes calculating ambient light at every surface point, as you can see in this image, but also simulating light bouncing between objects in our scene at increasing ray depth to generate shadows, reflections and even reflections within those reflections. And then we end by using the optimized compute kernels in the new MPS de-noiser to produce this really high-quality image.
So traditional CPU renderers would take over a minute to generate a frame like this. With Metal, we've been able to reduce this to under 30 milliseconds, which is a staggering 1,000 times faster.
So pro app developers -- thank you. We think it's pretty great too. So pro app developers can now use Metal Compute to build new interactive tools to visualize these physically accurate lighting effects like these dramatic shadows that are cast by the buildings and also by that fire escape.
Or if we pan over here to this roof, the realistic way that green light bounces onto this neighboring building.
That just looks great. Thank you. Another great effect that we can simulate or model is accurate reflections, as you can see in the windshields of this bus. In fact, you can see the shadows moving in that windshield or in those reflections as I change the position of the sun. So that looks great, but animating objects in a ray trace scene can be very computationally expensive because we have to update the bounding volume hierarchy that's associated with the geometry.
Fortunately, with Metal Compute and the MPS APIs, we're able to move all of this work onto the GPUs and achieve this great animation.
And there go our trains.
So that was just an example of what's possible when you use Metal Compute for accelerated ray tracing on the new Mac Pro. It's a beast. Thank you. Back to you, Jeremy.
Thank you, Rav. So that's what we did in just a short bit of time. But high-performance ray tracing can be even more powerful in the hands of our most expert third-party developers. Which is why we are so excited that OTOY has announced they're using Metal Compute to build OctaneX, an all-new version of Octane Renderer, their interactive path tracing engine optimized for Metal and the Apple platforms.
And we are incredibly thrilled to be working with Maxon who's bringing their powerful GPU-accelerated renderer Redshift to the Mac with an all-new version optimized for Metal and the new Mac Pro. So with advanced Metal Compute APIs and incredibly powerful hardware, we've built Metal to power the most advanced professional content creation tools. And we've been working really closely with the leading app developers who have all announced that the upcoming versions of these professional content creation tools and apps will be fully optimized for Metal and the Apple platforms.
For instance, Serif has just announced an all-new version of Affinity Photo for Mac using Metal's graphics and Compute APIs to hypercharge their advanced photo processing engine in achieving stunning performance increases. More than 10 times better performance and jaw-dropping increases of 50 times better performance using Metal with multiple GPUs on the new Mac Pro.
So to enable these kinds of pro apps and this kind of performance, we work really closely with our GPU hardware and software partner teams to add all-new features to Metal. To support the new AMD Infinity Fabric link in the new Mac Pro, we added the Metal Peer Group API.
So what does this do? Well, previously sharing workloads across multiple GPUs would require moving large amounts of data in a round trip across the PCI bus. But with the Metal Peer Group API, apps can use multiple GPUs much more efficiently, directly sharing data across the Infinity Fabric link and without taking that long and scenic route through system memory.
Now finally, you've seen how you can use Metal Compute and the new Mac Pro to process a whole lot more pixels. But we also want you to produce even more beautiful pixels. So we've introduced the gorgeous new Pro Display XDR with all-new HDR software support in macOS. You can now use the AV Foundation APIs to decode HDR videos or you can render native HDR content directly with Metal. You can manage the HDR display tone mapping yourself or you can let the window system and our advanced display system software handle it all for you. And with these same APIs, you can also access a far greater range of brightness levels on many of our existing Mac displays as well. So that's our Metal update for today. It's even easier to use Metal across all of our platforms with Metal in the iOS Simulator and simplified GPU families. We have all-new features and powerful hardware to unleash all-new levels of GPU compute performance. And we built Metal to be the best GPU programming API to drive modern professional content creation tools and apps. So thank you very much. I'll hand it back to Sebastien now. Thank you.
Thank you, Jeremy. Don't you love Metal? Don't you love the power of Metal? Really, really amazing. Now what you've seen this afternoon is a huge amount of new technology that's new for all of you as developers. And what we've shown covering developer tools, the Apple platforms and core technologies is just some of the highlights. We actually have so much more to show you this week. And so ahead of us are 109 different sessions. And it turns out that that wasn't enough to cover everything. So this year we added an additional 27 video-only sessions.
And when you want to dive even deeper, you could sit down with some of the over 1,000 Apple engineers that are here at WWDC in 229 different lab sessions throughout the week.
So get out there and prepare to have your minds blown. It's going to be a great week. Thank you.
[ Applause ]