A Look at What Apple Core ML Offers

Build more intelligent apps with machine learning.

Take advantage of Core ML, a new foundational machine learning technology used across Apple products, including Siri, Camera, and QuickType. Core ML delivers blazingly fast performance with easy integration of machine learning models, enabling you to build apps with intelligent new features using just a few lines of code.

What is Core ML?

In one sentence: With Core ML, you can integrate trained machine learning models into your app.

More concretely: Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for image analysis, Foundation for natural language processing (for example, the NSLinguisticTagger class), and GameplayKit for evaluating learned decision trees. Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders.
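To make the stack concrete, here is a minimal sketch of the common Vision + Core ML pairing for image analysis. The model class FlowerClassifier is hypothetical; Xcode generates a class like this when you add a .mlmodel file to your project.

```swift
import CoreML
import Vision

// Wrap a Core ML model in a Vision image-classification request.
// FlowerClassifier is a hypothetical Xcode-generated model class;
// its .model property exposes the underlying MLModel.
func classify(_ image: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: FlowerClassifier().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classification observations sorted by confidence.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    // Vision takes care of scaling the image to the model's input size.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```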

Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.

How do you use Core ML?

Step one: obtain a Core ML model (Getting a Core ML Model).

Specifically, there are two ways:

1. Use one of the pre-trained models that Apple provides; several are currently available for download on the Core ML website.

2. Convert a trained model to the Core ML format (Converting Trained Models to Core ML).

If your model is created and trained using a supported third-party machine learning tool, you can use Core ML Tools to convert it to the Core ML model format. Table 1 lists the supported models and third-party tools.

The conversion itself is straightforward: Convert your model using the Core ML converter that corresponds to your model's third-party tool. Call the converter's convert method and save the resulting model to the Core ML model format (.mlmodel).

No TensorFlow support?

That said, Core ML also lets you write your own conversion tool:

It"s possible to create your own conversion tool when you need to convert a model that isn"t in a format supported by the tools listed in Table 1.

Once you have a model, the next step is to integrate it into your app (Integrating a Core ML Model into Your App).

This involves the following tasks:

  • Adding a Model to Your Xcode Project
  • Creating the Model in Code
  • Getting Input Values to Pass to the Model
  • Using the Model to Make Predictions
  • Building and Running a Core ML App

With these steps done, you can build your app.
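As an illustration of the "create the model and make predictions" steps, here is a minimal sketch in the spirit of Apple's sample code. MarsHabitatPricer and its input names are assumptions: Xcode generates such a class automatically when the corresponding .mlmodel file is added to your project.

```swift
import CoreML

// Hypothetical generated class from MarsHabitatPricer.mlmodel
// (as in Apple's "Integrating a Core ML Model into Your App" sample).
let model = MarsHabitatPricer()

do {
    // Input names and types must match those declared in the .mlmodel file.
    let output = try model.prediction(solarPanels: 1.0, greenhouses: 2.0, size: 750)
    print("Predicted price: \(output.price)")
} catch {
    print("Prediction failed: \(error)")
}
```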

If you need more advanced functionality, you can use the Core ML API directly (Core ML API | Apple Developer Documentation):

You can use Core ML APIs directly in cases where you need to support custom workflows or advanced use cases. As an example, if you need to make predictions while asynchronously collecting input data into a custom structure, you can use that structure to provide input features to your model by adopting the MLFeatureProvider protocol.
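Here is a minimal sketch of adopting MLFeatureProvider. The model and its two double-valued inputs, "temperature" and "humidity", are hypothetical; the feature names must match the inputs declared in your .mlmodel.

```swift
import CoreML

// Custom class that collects input data asynchronously and exposes it
// to a model. The feature names "temperature" and "humidity" are
// hypothetical; they must match the inputs declared in your .mlmodel.
class SensorFeatures: NSObject, MLFeatureProvider {
    var temperature = 0.0
    var humidity = 0.0

    var featureNames: Set<String> {
        return ["temperature", "humidity"]
    }

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "temperature": return MLFeatureValue(double: temperature)
        case "humidity":    return MLFeatureValue(double: humidity)
        default:            return nil
        }
    }
}

// Usage: pass the provider to MLModel's generic prediction API.
// let result = try model.prediction(from: sensorFeatures)
```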

Next, let's look at the low-level support beneath Core ML:

Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders.

Accelerate

Make large-scale mathematical computations and image calculations, optimized for high performance.
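For a flavor of what Accelerate offers, here is a tiny vDSP sketch (an element-wise vector multiply); the data values are arbitrary examples.

```swift
import Accelerate

// Element-wise multiply of two Float vectors using vDSP.
// The stride arguments (1) mean "use every element".
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var product = [Float](repeating: 0, count: a.count)
vDSP_vmul(a, 1, b, 1, &product, 1, vDSP_Length(a.count))
// product is now [10, 40, 90, 160]
```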

Metal Performance Shaders

Add low-level and high-performance kernels to your Metal app. Optimize graphics and compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family.
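As a small example of one of these pre-tuned kernels, here is a hedged sketch that runs a Gaussian blur over a Metal texture; texture creation is simplified and error handling is minimal.

```swift
import MetalPerformanceShaders

// Sketch: blur one 512×512 texture into another with an MPS image kernel.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let commandBuffer = queue.makeCommandBuffer() else {
    fatalError("Metal is not available on this device")
}

let desc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm, width: 512, height: 512, mipmapped: false)
desc.usage = [.shaderRead, .shaderWrite]
let source = device.makeTexture(descriptor: desc)!
let destination = device.makeTexture(descriptor: desc)!

// MPSImageGaussianBlur is one of the tuned image-processing kernels.
let blur = MPSImageGaussianBlur(device: device, sigma: 2.0)
blur.encode(commandBuffer: commandBuffer,
            sourceTexture: source,
            destinationTexture: destination)
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```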

BNNS

Basic neural network subroutines (BNNS) is a collection of functions that you use to implement and run neural networks, using previously obtained training data.

Creating a Neural Network for Inference

The Accelerate framework's new basic neural network subroutines (BNNS) is a collection of functions that you can use to construct neural networks. It is supported in macOS, iOS, tvOS, and watchOS, and is optimized for all CPUs supported on those platforms.

BNNS supports implementation and operation of neural networks for inference, using input data previously derived from training. BNNS does not do training, however. Its purpose is to provide very high performance inference on already trained neural networks.

This document introduces BNNS in general terms. For further details, consult the header file bnns.h in the Accelerate framework.

Structure of a Neural Network

... ...

BNNS provides functions for creating, applying, and destroying three kinds of layers:

  • A convolution layer, for each pixel in an input image, takes that pixel and its neighboring pixels and combines their values with weights from the training data to compute the corresponding pixel in the output image.
  • A pooling layer produces a smaller output image from its input image by breaking the input image into smaller rectangular subimages; each pixel in the output is the maximum or average (your choice) of the pixels in the corresponding subimage. A pooling layer does not use training data.
  • A fully connected layer takes its input as a vector; this vector is multiplied by a matrix of weights from training data. The resulting vector is updated by the activation function.
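To make the fully connected case concrete, here is a hedged sketch using the BNNS C API from Swift. The imported constant names (e.g. BNNSDataTypeFloat32) may vary slightly across SDK versions, and the sizes, weights, and bias values are arbitrary examples.

```swift
import Accelerate

// Sketch: one fully connected layer mapping 3 inputs to 2 outputs,
// computing relu(W * x + b). Weights are row-major: out_size rows
// of in_size values each.
let weights: [Float] = [0.1, 0.2, 0.3,
                        0.4, 0.5, 0.6]
let bias: [Float] = [0.01, 0.02]

// BNNSLayerData holds raw pointers, so give the data stable storage.
let wPtr = UnsafeMutablePointer<Float>.allocate(capacity: weights.count)
wPtr.initialize(from: weights, count: weights.count)
let bPtr = UnsafeMutablePointer<Float>.allocate(capacity: bias.count)
bPtr.initialize(from: bias, count: bias.count)

var inDesc = BNNSVectorDescriptor(size: 3, data_type: BNNSDataTypeFloat32,
                                  data_scale: 0, data_bias: 0)
var outDesc = BNNSVectorDescriptor(size: 2, data_type: BNNSDataTypeFloat32,
                                   data_scale: 0, data_bias: 0)
var params = BNNSFullyConnectedLayerParameters(
    in_size: 3, out_size: 2,
    weights: BNNSLayerData(data: wPtr, data_type: BNNSDataTypeFloat32,
                           data_scale: 0, data_bias: 0, data_table: nil),
    bias: BNNSLayerData(data: bPtr, data_type: BNNSDataTypeFloat32,
                        data_scale: 0, data_bias: 0, data_table: nil),
    activation: BNNSActivation(function: BNNSActivationFunctionRectifiedLinear,
                               alpha: 0, beta: 0))

// Create the filter, run one inference, then release it.
let filter = BNNSFilterCreateFullyConnectedLayer(&inDesc, &outDesc, &params, nil)
let input: [Float] = [1, 2, 3]
var output = [Float](repeating: 0, count: 2)
if BNNSFilterApply(filter, input, &output) == 0 {
    print(output)  // relu(W * input + b)
}
BNNSFilterDestroy(filter)
```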

Overall, Core ML is already a fairly complete solution, and I expect future app development to increasingly consider adding basic ML features. What I am still curious about is performance: just how much can be done on today's hardware? As these applications spread, the demand for dedicated hardware will surely grow stronger as well.

T.S.

Welcome to follow my WeChat public account: StarryHeavensAbove

The header image is from the internet; copyright belongs to its owner.

