AI at the edge.
The future of machine learning is at the “edge,” meaning the edge of the network, on users’ own devices, as opposed to centralized computing in the cloud.

In a centralized machine learning setup, users send their data to a remote server, which runs the model and sends a prediction back. That round trip is slower, more expensive, less reliable, and less secure than edge computing, where predictions are made directly on the user’s device.
The problem with edge computing is that mobile and IoT devices are generally weak and low-powered, while AI models often have intense compute requirements.
Apple’s M1 chip is the answer.
The M1 is a breakthrough for machine learning at the edge: its Neural Engine can execute 11 trillion operations per second, which Apple says translates to up to 15x faster machine learning performance than the previous generation of Macs.
Built on cutting-edge 5-nanometer process technology, the M1 packs in 16 billion transistors. That density doesn’t come at the cost of efficiency, either: the new M1 Macs get up to 2x longer battery life than their predecessors.
M1’s Neural Engine
Much of the M1’s efficiency in AI workloads comes from its Neural Engine, a type of NPU, or Neural Processing Unit. Unlike a general-purpose CPU or GPU, an NPU is dedicated to accelerating the core operations of neural networks, such as matrix multiplication.
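Developers don’t program the Neural Engine directly. On Apple platforms, Core ML decides when to offload work to it, and you opt in by telling it which compute units it may use. Here’s a minimal sketch, assuming a compiled Core ML model on disk; the “Classifier.mlmodelc” path is just a placeholder:

```swift
import CoreML

// Ask Core ML to use every available compute unit: CPU, GPU, and Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .all

do {
    // Placeholder path to a compiled Core ML model.
    let modelURL = URL(fileURLWithPath: "Classifier.mlmodelc")
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Core ML now runs supported layers on the Neural Engine and falls back
    // to the GPU or CPU for anything it can't accelerate.
    print(model.modelDescription)
} catch {
    print("Failed to load model: \(error)")
}
```

Switching computeUnits to .cpuOnly or .cpuAndGPU is a handy way to measure how much of a model’s speedup actually comes from the Neural Engine.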
You’ve probably heard of another famous NPU out there: Google’s TPU, or Tensor Processing Unit.
Fast, efficient chips are quickly becoming a must-have, not a nice-to-have. The state-of-the-art language model, GPT-3, has 175 billion parameters, and naturally has intensive compute requirements for inference.
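As a rough back-of-the-envelope, if you assume about two arithmetic operations per parameter for each generated token (a common ballpark for dense models), GPT-3 needs on the order of 2 × 175 billion ≈ 350 billion operations per token of output, to say nothing of the memory required just to hold its weights.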
Centralized networks could simply be too slow to serve these ever-heavier models. Devices with the M1 chip (currently the MacBook Air, Mac mini, and MacBook Pro) make training and deploying AI models on-device much more feasible, as the sketch below illustrates.
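On the training side, Core ML has an on-device update API for models that are exported as updatable. The sketch below is purely illustrative: the model path, the “input” and “label” feature names, and the single-example batch are stand-ins for whatever a real updatable model would define.

```swift
import CoreML

do {
    // Placeholder path to a compiled, *updatable* Core ML model.
    let modelURL = URL(fileURLWithPath: "UpdatableClassifier.mlmodelc")

    // Build one toy training example; real code would batch up many of these.
    // "input" and "label" stand in for the feature names the model actually declares.
    let features = try MLMultiArray(shape: [4], dataType: .float32)
    for i in 0..<features.count {
        features[i] = NSNumber(value: Float.random(in: 0...1))
    }
    let example = try MLDictionaryFeatureProvider(dictionary: [
        "input": MLFeatureValue(multiArray: features),
        "label": MLFeatureValue(string: "cat")
    ])
    let trainingData = MLArrayBatchProvider(array: [example])

    // Let Core ML choose the best available hardware for the update pass.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    // The update runs asynchronously and hands back a personalized model.
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: config,
                                completionHandler: { context in
        try? context.model.write(to: URL(fileURLWithPath: "Personalized.mlmodelc"))
    })
    task.resume()
} catch {
    print("On-device update failed: \(error)")
}
```

Whether an update pass actually lands on the Neural Engine depends on the model; Core ML makes that call, not the developer.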
You may have noticed that the iPhone is not included in the list of devices with the M1 chip.
Instead, the iPhone 12 has what’s called the “A14 Bionic” chip, an 11.8-billion-transistor powerhouse with a fast Neural Engine, a new image signal processor, and 70% faster machine learning accelerators.