Core ML brings machine learning to Apple developers

Ngoc Huynh

Apple’s Core ML frameworks provide a standardized — if limited — way to embed machine learning into Mac and iOS apps.

Earlier this week Apple unveiled Core ML, a software framework that lets developers deploy and work with trained machine learning models in apps on all of Apple’s platforms: iOS, macOS, tvOS, and watchOS.

Core ML is intended to spare developers from having to build all the platform-level plumbing themselves for deploying a model, serving predictions from it, and handling any exceptional conditions that arise along the way. But it’s also currently a beta product, and one with a highly constrained feature set.

Core ML works hand in hand with three companion frameworks for serving predictions: Vision for analyzing images, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees and other gameplay logic.

Each framework provides high-level objects, implemented as classes in Swift, that cover both specific use cases and more open-ended prediction serving. The Vision framework, for instance, provides classes for face detection, barcodes, text detection, and horizon detection, as well as more general classes for things like object tracking and image alignment.
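
For a sense of how little code those high-level classes demand, here is a minimal sketch of face detection using Vision’s request-and-handler pattern. It assumes you already have a CGImage in hand and keeps error handling to a bare minimum:

```swift
import Vision
import CoreGraphics

// Detect face rectangles in a CGImage using Vision's high-level request API.
func detectFaces(in image: CGImage) {
    // The request carries a completion handler that receives the observations.
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is reported in normalized coordinates (0 through 1).
            print("Found a face at \(face.boundingBox)")
        }
    }

    // The handler binds the request to a specific image and runs it.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```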

Apple intends for most Core ML development work to be done through these high-level classes. “In most cases, you interact only with your model’s dynamically generated interface,” reads Apple’s documentation, “which is created by Xcode automatically when you add a model to your Xcode project.” For “custom workflows and advanced use cases,” though, there is a lower-level API that provides finer-grained manipulation of models and predictions.
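
That lower-level API revolves around the MLModel class and the generic MLFeatureProvider protocol. The following is only an illustrative sketch: the model file name (SomeModel) and feature name (someInput) are placeholders, not names from Apple’s documentation or samples:

```swift
import CoreML
import Foundation

enum ModelError: Error { case modelNotFound }

// Load a compiled Core ML model from the app bundle and run one prediction
// through the generic, lower-level API (no Xcode-generated wrapper class).
func predict(inputValue: Double) throws -> MLFeatureProvider {
    // "SomeModel" is a placeholder; Xcode compiles SomeModel.mlmodel into
    // a SomeModel.mlmodelc bundle at build time.
    guard let url = Bundle.main.url(forResource: "SomeModel",
                                    withExtension: "mlmodelc") else {
        throw ModelError.modelNotFound
    }
    let model = try MLModel(contentsOf: url)

    // Inputs are passed as a generic feature provider keyed by feature name.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["someInput": MLFeatureValue(double: inputValue)])

    // The result is another feature provider holding the model's outputs.
    return try model.prediction(from: input)
}
```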

Because Core ML is for serving predictions from models, not for training models, developers need to have trained models already in hand. Apple supplies a few example machine learning models, some of which are immediately useful, such as the ResNet50 model for identifying common objects (e.g., cars, animals, people) in images.
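
Combined with Vision, a sample model like ResNet50 takes only a handful of lines to wire up. The sketch below assumes the Resnet50.mlmodel file from Apple’s site has been added to an Xcode project, so that Xcode has generated the corresponding Resnet50 class:

```swift
import Vision
import CoreML
import CoreGraphics

// Classify the contents of a CGImage with the bundled Resnet50 sample model.
func classify(_ image: CGImage) {
    // Wrap the Core ML model so Vision can drive it.
    guard let visionModel = try? VNCoreMLModel(for: Resnet50().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Top label: \(top.identifier), confidence: \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Vision handles scaling and cropping the image to the size the model expects, which is one reason Apple steers developers toward these higher-level wrappers rather than the raw Core ML calls.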

The most useful applications for Core ML will come by way of models trained and provided by developers themselves. This is where early adopters are likely to run into the biggest snags, considering models will have to be converted into Core ML’s own model format before they can be deployed.

Apple has provided tools for accomplishing this, chiefly the coremltools package for Python, which can convert from a number of popular third-party model formats. Bad news: coremltools currently supports only earlier versions of some of those frameworks, such as version 1.2.2 of the Keras deep learning library (now at version 2.0). Good news: the toolkit is open source (BSD-licensed), meaning it should be relatively easy for contributors to bring it up to speed.

Core ML is limited in other ways. For instance, there are no provisions within Core ML for model retraining or federated learning, where data collected from the field is used to improve the accuracy of the model. That’s something you would have to implement by hand, most likely by asking app users to opt in to data collection and using that data to retrain the model for a future version of the app.

It’s entirely possible that features like this could surface in future revisions of Core ML, once developers get used to the basic workflow involved and become comfortable with Core ML’s metaphors. A standard methodology for using trained machine learning models in Apple apps is a good start for developers, but making it easy to transform user interactions with those models into better intelligence over time would be even more appealing.


Source: http://www.infoworld.com