Introducing the Create ML App
Bringing the power of Core ML to your app begins with one challenge. How do you create your model? The new Create ML app provides an intuitive workflow for model creation. See how to train, evaluate, test, and preview your models quickly in this easy-to-use tool. Get started with one of the many available templates handling a number of powerful machine learning tasks. Learn more about the many features for continuous model improvement and experimentation.
Related Videos
WWDC20
- Build an Action Classifier with Create ML
- Build Image and Video Style Transfer models in Create ML
- Control training in Create ML with Swift
WWDC19
- Advances in Natural Language Framework
- Building Activity Classification Models in Create ML
- Creating Great Apps Using Core ML and ARKit
- Training Object Detection Models in Create ML
- Training Recommendation Models in Create ML
- Training Sound Classification Models in Create ML
- Training Text Classifiers in Create ML
- What's New in Machine Learning
This year, our Create ML release has new models and a whole new workflow.
Every workflow starts with input. The devices that we develop for, use, and love are rich with information from different sensors such as the camera, the microphone, the keyboard, the accelerometer, and even the gyroscope. All of these types of input can be used to train machine learning models and make your apps more personalized and more intelligent.
Last year, we supported three of these types of input: images, text, and tabular data.
This year, we're increasing the set of available domains from three to five, introducing Activity and Sound. We're also expanding the breadth of available models within all of them. And all of this is being brought into a new medium we call the Create ML app.
It all starts with identifying your domain.
And once you filter by your input, you can see all of the available model types you can create.
Create ML then breaks down the task of creating a machine learning model into three simple phases: input, training, and output. This new environment changes the way you interact with machine learning on the system. The app displays an overview of your data and a rich set of analytics as it trains. You can visualize your model's progress in an easy-to-understand graphical view that displays the accuracy of your model across iterations. You can also see a breakdown of the precision and recall values for each class your model was trained on. And the table is interactive, so you can filter by class or by percentage to better understand your model's performance. The process of testing is as simple as dragging and dropping in new data.
You have the ability to retest at any time just by clicking the Test button.
And what's really unique about Create ML is that you can easily preview trained models in the Output tab. This new feature lets you see exactly how your model will predict without having to incorporate it into your app.
This means no more waiting to deploy. You can do this directly in Create ML, saving you time to perfect your models instead. The Preview section is also custom built for every template, ensuring a complete end-to-end experience for every task.
But what better way to see this than to show it to you live? Let's take a look. Now, over in Xcode, I've been working on a flower classifier app to help me identify different types of flowers. And I've been using a model that I downloaded from the Model Gallery called Resnet50. And this is a well-known image classifier model that's been trained to recognize 1,000 different classes.
But this model takes up about 100 megabytes of my app. And when I actually try it on different images, it knows that this hibiscus is a flower, but it doesn't know the exact type. So, it doesn't quite suit my needs. Now, what I can do from Xcode is open Developer Tools and launch Create ML.
This then prompts me to create a new document. And we can see I'm launched into the Template View where I can see all the available models we can create. Since I'm working with images, I'll filter by image and select the Image Classifier.
We can then name this.
And then, pick a place to save it so we can come back to it later, and launch into the new app view. You can see that I'm first prompted to drag in input as training data. And if I progress through some of the other tabs, they're not unlocked yet because I haven't gone through the flow.
So, let's try this sequentially. On my desktop, I set aside some images of different types of flowers. And you can see, I have some hibiscus, some passionflower, and then even some roses, dahlias, and daisies.
I can take this folder and drag it in, and immediately see that it contains 65 images across five different classes. Now, since we have our input data, I can hit the Run button and the model automatically begins training. It starts by extracting features from the images, and then we can see the progress as training begins.
I can see a breakdown of how the model's performing on this data. But what I'd really like to do is see how the model performs on new data that it hasn't seen. So, I'll navigate to the Testing tab and I'll drag in these new flowers that I've set aside and hit Test.
Now, what I'd really like to do is take a look at the trained model in the Output tab. And we can see here I have a flower classifier that's just 66 kilobytes. To actually preview it, I've taken some other photos, including this hibiscus that wasn't working before with Resnet50.
And I can see now, this model can correctly predict exactly what it is. And I can see other predictions the model has made and confidence values for each one. I can even take a full folder and drag them in and debug any image that this model has predicted on.
Now, once I'm satisfied, I can take this model, drag it out, and reintegrate it into my app.
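Back in code, swapping in the retrained model is a small change. Here is a minimal sketch of classifying an image with Vision, assuming Xcode has generated a `FlowerClassifier` class from the exported .mlmodel file; the function name and image source are hypothetical.

```swift
import CoreGraphics
import CoreML
import Vision

// Classify a flower image with the retrained Create ML model.
// `FlowerClassifier` is the class Xcode generates from the .mlmodel.
func classifyFlower(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: FlowerClassifier().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Prediction: \(top.identifier) (confidence: \(top.confidence))")
    }

    try VNImageRequestHandler(cgImage: image).perform([request])
}
```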
But what's more, you can also leverage the power of Continuity Camera in Create ML. From this, I can import from my phone, which is attached, and try to take a photo of this flower that I have here on stage, and we can see how it does. And it actually turns out okay.
Now, if you're not happy with your performance, or if you'd just like to run some more experiments, there's a Plus button here as well. You can select more training data, toggle how your validation data is specified, and even change your testing data as well.
This time, I might want to tweak some augmentations. And then, I can hit Run to train.
And that's a look at the new workflow in Create ML. Let's go back to the slides.
So, you've seen Create ML gives you a whole new way to train your custom machine learning models on the Mac, with a beautiful look and feel. And with additions like metrics visualization, live progress, and interactive preview, the Create ML app sets the bar for a great model training experience.
In the demo, we walked through just one of the models you can create with Create ML. But in this release, we're introducing nine. So, let's take a high-level tour of all the models you can create, along with some example apps.
Starting with the image domain, we have the Image Classifier and Object Detector.
An Image Classifier can be used for categorizing images based on their contents. For example, the Art Style identifier uses a custom Image Classifier to determine the most likely movement of a piece. It then separately provides an overview of notable artists to complete the app experience.
In Create ML, the Image Classifier leverages core Apple technology by performing transfer learning on top of the Vision feature print model already built into the OS. This allows you to benefit from faster training times and a reduced model size in your app. You also have the option to train with augmentations to make your model more robust to unseen input.
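The same transfer-learning flow is also available from the CreateML framework in a Swift script. A minimal sketch, assuming a training folder with one subdirectory per class; the paths and the particular augmentation choices are illustrative:

```swift
import CreateML
import Foundation

// Training data laid out as one subdirectory per class label.
let trainingDir = URL(fileURLWithPath: "/Users/me/Flowers/Training")

// Augmentations make the model more robust to unseen input.
let parameters = MLImageClassifier.ModelParameters(
    maxIterations: 25,
    augmentationOptions: [.crop, .rotation, .blur]
)

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)
try classifier.write(to: URL(fileURLWithPath: "/Users/me/FlowerClassifier.mlmodel"))
```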
If you want to identify multiple objects within an image rather than a single one, you might want to create an Object Detector.
Object Detectors can be used for localizing and recognizing contents within an image. For example, these can be trained to detect one or more classes such as specific playing cards or the exact suits contained on them.
The Object Detector is a deep learning-based model that performs data augmentation to make it more robust, and it does this entirely on your Mac's GPU.
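Using a trained object detector in an app follows the same Vision pattern as an image classifier, except the results come back as recognized objects with bounding boxes. A sketch, where the model is assumed to be one you exported from Create ML:

```swift
import CoreGraphics
import CoreML
import Vision

// Detect and localize objects (e.g. playing cards) in an image.
func detectObjects(in image: CGImage, with model: VNCoreMLModel) throws {
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for object in results {
            let label = object.labels.first?.identifier ?? "unknown"
            // boundingBox is in normalized coordinates (0...1, lower-left origin).
            print("\(label) at \(object.boundingBox)")
        }
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```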
Our next domain is Sound. Within Sound, we have a new model called the Sound Classifier.
This model allows you to determine the most dominant sound within an audio stream.
Since audio data is time-series based, you can differentiate between the start and end points of different sounds, such as when the guitar solo ends and the crowd goes wild.
This model leverages transfer learning, so you can also experience faster training times. We also understand that optimizing performance across complex applications is challenging, which is why we want to emphasize that these models are lightweight and run on the Neural Engine, making them ideal for real-time applications on any device.
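To give a flavor of that real-time use, here is a sketch of streaming microphone audio through the SoundAnalysis framework, assuming a sound classifier model exported from Create ML; the buffer size and class are illustrative:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Classify live microphone audio with a Create ML sound classifier.
final class LiveSoundClassifier: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start(with model: MLModel) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        try analyzer.add(SNClassifySoundRequest(mlModel: model), withObserver: self)
        self.analyzer = analyzer  // keep the analyzer alive for the session

        input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
    }

    // Called as each window of audio is classified.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}
```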
Our third domain is Activity, and within Activity, we have the Activity Classifier for the first time. This model also operates on time-series data. Activity Classifiers can be trained to categorize motion data from a variety of sensors, such as the accelerometer and gyroscope.
These models are deep learning-based and train on the GPU. And they result in small model sizes that are also ideal for deployment on any device.
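As with the other templates, training can also be scripted with the CreateML framework. A rough sketch, assuming the activity data source follows the labeled-directories layout (one folder of sensor CSV recordings per activity label); the paths are illustrative:

```swift
import CreateML
import Foundation

// Assumed layout: one subdirectory per activity label, each
// containing CSV files of accelerometer/gyroscope recordings.
let trainingDir = URL(fileURLWithPath: "/Users/me/Activities/Training")

let classifier = try MLActivityClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)
try classifier.write(to: URL(fileURLWithPath: "/Users/me/ActivityClassifier.mlmodel"))
```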
Our second to last input is Text. Within Text, there are two model types available: the Text Classifier and the Word Tagger.
Text classification can be used to label sentences, paragraphs, or even entire articles based on their contents. You can train these for custom topic identification or categorization tasks.
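In Swift, a text classifier can be trained in a few lines with the CreateML framework. A minimal sketch, where the CSV file, its column names, and the sample sentence are assumptions:

```swift
import CreateML
import Foundation

// A CSV with a "text" column and a "topic" label column (assumed).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/Users/me/articles.csv"))

let classifier = try MLTextClassifier(
    trainingData: data,
    textColumn: "text",
    labelColumn: "topic"
)

// Quick sanity check on a new sentence.
print(classifier.prediction(from: "The team won the championship last night."))
try classifier.write(to: URL(fileURLWithPath: "/Users/me/TopicClassifier.mlmodel"))
```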
In Create ML, there are a variety of algorithms for you to try, and even a new transfer learning option this year. The Word Tagger is slightly more nuanced: it's ideal for labeling tokens or words of interest in text. General-purpose examples are things like tagging different parts of speech or recognizing named entities, though you could customize your own to do things like tag cheeses. With a Cheese Tagger, you could be sure to identify different flavor notes from any cheese description.
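For fun, here is what a Cheese Tagger might look like as a CreateML script. The tiny inline dataset is invented purely for illustration; the token and label columns are parallel arrays, one pair per sentence:

```swift
import CreateML
import Foundation

// Each row pairs a sentence's tokens with per-token labels.
let data = try MLDataTable(dictionary: [
    "tokens": [["I", "love", "aged", "gouda"],
               ["Brie", "is", "soft", "and", "buttery"]],
    "labels": [["NONE", "NONE", "NONE", "CHEESE"],
               ["CHEESE", "NONE", "NONE", "NONE", "NONE"]]
])

let tagger = try MLWordTagger(
    trainingData: data,
    tokenColumn: "tokens",
    labelColumn: "labels"
)
try tagger.write(to: URL(fileURLWithPath: "/Users/me/CheeseTagger.mlmodel"))
```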
Our last domain is the most general of all five: tabular data. Within this, we have three model types: the Tabular Classifier, Tabular Regressor, and Recommender. Classifiers are for categorizing samples based on their features of interest. Features can be a variety of different types, such as integers, doubles, and strings, so long as your target is a discrete value.
These allow you to do things like determine if a seat is comfortable or not based on a particular person's height and weight and specific properties of the seat such as the amount of leg room.
What's unique about the Tabular Classifier is that it abstracts away the underlying algorithm for you and identifies the best of multiple classifiers for your data.
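A sketch of the seat-comfort idea as a CreateML script; the CSV file and its column names are assumptions:

```swift
import CreateML
import Foundation

// A CSV with feature columns (height, weight, legRoom, ...) and a
// discrete "comfortable" target column (assumed).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/Users/me/seats.csv"))
let (trainingData, testingData) = data.randomSplit(by: 0.8)

// MLClassifier picks the best of several classifier algorithms for the data.
let classifier = try MLClassifier(trainingData: trainingData, targetColumn: "comfortable")
print(classifier.evaluation(on: testingData))
try classifier.write(to: URL(fileURLWithPath: "/Users/me/SeatClassifier.mlmodel"))
```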
If you instead want a model that will predict a numeric value, such as a rating or a score, you may instead want to use the Tabular Regressor.
This model quantifies samples based on their defining features. So, you could train one to estimate the price of a house based on its location and number of bedrooms, bathrooms, and parking spaces.
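A corresponding sketch for the house-price example, again with an assumed CSV and column names:

```swift
import CreateML
import Foundation

// A CSV with location, bedrooms, bathrooms, parking, and a numeric
// "price" target column (assumed).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/Users/me/houses.csv"))

// MLRegressor automatically picks the best of several regressors; use
// MLLinearRegressor, MLRandomForestRegressor, etc. to choose one yourself.
let regressor = try MLRegressor(trainingData: data, targetColumn: "price")
try regressor.write(to: URL(fileURLWithPath: "/Users/me/HousePricer.mlmodel"))
```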
Like the Classifier, the Regressor also automatically detects the best of multiple regressors for your data. Though you still have the flexibility to choose a specific boosted tree, decision tree, random forest, or linear regressor if you like. And our last model is the Recommender, which allows you to recommend content based on user behavior.
The Recommender can be trained on user-item interactions, with or without ratings. And it's able to be deployed on device, saving you the hassle of setting up a server.
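A sketch of training a recommender from a table of user-item interactions; the CSV file and its column names are assumptions:

```swift
import CreateML
import Foundation

// A CSV of user-item interactions (assumed columns).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/Users/me/ratings.csv"))

let recommender = try MLRecommender(
    trainingData: data,
    userColumn: "userId",
    itemColumn: "itemId",
    ratingColumn: "rating"  // omit for interactions without explicit ratings
)
// The saved model can then be deployed on device via Core ML.
try recommender.write(to: URL(fileURLWithPath: "/Users/me/Recommender.mlmodel"))
```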
As we saw, Create ML comes with great features to help you set up your machine learning experiments, with native support for data visualization, metrics, and performance. And the machine learning models you train can easily be saved and shared with members of your team. Of course, all of this leverages the power and efficiency of training on your Mac.
This new medium augments the other ways you already have to create machine learning models, such as Swift Playgrounds, Swift scripts, or the Swift REPL, but it allows you to do so without writing a single line of code. We believe this brings machine learning to everyone.
To summarize, in Create ML this year, you have new models, nine templates, and a whole new workflow.