We just received the new iPhone 11! We couldn’t wait to try out the performance of its Neural Engine, so we put together a small benchmark.
We ran DeepLab with a MobileNet backbone on a 513×513 image. DeepLab is a semantic segmentation network: it returns a class for each pixel of the image. It can be trained to segment people, objects, animals, background… pretty much anything.
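On iOS, a model like this typically runs through Core ML. Here is a minimal sketch of a single inference, assuming a DeepLabV3 `.mlmodel` has been added to the project (Xcode generates a `DeepLabV3` class for it; the `image` input and `semanticPredictions` output names follow Apple's published DeepLabV3 conversion and may differ for other models):

```swift
import CoreML

// Assumptions: Xcode generated a `DeepLabV3` class from the .mlmodel file,
// and `pixelBuffer` holds the 513x513 input image as a CVPixelBuffer.
func segment(_ pixelBuffer: CVPixelBuffer) throws -> MLMultiArray {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all  // allow the Neural Engine when available

    let model = try DeepLabV3(configuration: configuration)
    let output = try model.prediction(image: pixelBuffer)

    // `semanticPredictions` is a 513x513 array of per-pixel class indices.
    return output.semanticPredictions
}
```

In a real-time app you would feed this function frames from an `AVCaptureSession` and map the class indices back onto the image.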
Running on mobile, it enables some pretty interesting use cases. We recently built a prototype that anonymizes people in real time:
Previously, running a network like this in real time was impossible. But new chips, and manufacturers' focus on AI, are making it increasingly accessible.
According to this specific test, the iPhone 11 is about 20% faster than the previous generation.
The real question is: what happened between the iPhone X and the iPhone XR 😲😲😲?!? Both devices contain a chip dedicated to neural networks: the Neural Engine (NE). However, on the iPhone X the chip was not available to third-party apps.
The iPhone XR/XS is therefore the first generation of iPhone where developers can unleash the power of the Neural Engine, and that translates into a 3.5× jump in performance!
On the iPhone 11, the chip was simply updated, leading to increased performance but not such a dramatic jump.
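Whether Core ML is allowed to use the Neural Engine is controlled by the `computeUnits` flag on `MLModelConfiguration`. A hedged sketch of how numbers like the ones above can be measured, forcing inference on or off the accelerators (the `DeepLabV3` class is assumed to be the one Xcode generates from the model file):

```swift
import CoreML
import QuartzCore

// Assumptions: `DeepLabV3` is the Xcode-generated model class and
// `pixelBuffer` is a prepared 513x513 CVPixelBuffer.
func averageInferenceTime(computeUnits: MLComputeUnits,
                          pixelBuffer: CVPixelBuffer,
                          runs: Int = 20) throws -> Double {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = computeUnits
    let model = try DeepLabV3(configuration: configuration)

    _ = try model.prediction(image: pixelBuffer)  // warm-up, excluded from timing

    let start = CACurrentMediaTime()
    for _ in 0..<runs {
        _ = try model.prediction(image: pixelBuffer)
    }
    return (CACurrentMediaTime() - start) / Double(runs)
}

// Comparing .cpuOnly against .all shows how much the
// Neural Engine (and GPU) contributes on a given device:
// let cpu = try averageInferenceTime(computeUnits: .cpuOnly, pixelBuffer: buffer)
// let all = try averageInferenceTime(computeUnits: .all, pixelBuffer: buffer)
```

Note that `.all` lets Core ML pick the best available unit, so on devices where the Neural Engine is not exposed it will fall back to the GPU or CPU.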
Details and limitations
This benchmark is a single data point, measured in an unoptimized app. Performance depends heavily on the deep learning model used. YMMV. You can find the code for this benchmark here on GitHub.