Intel today announced a strategic business and technology collaboration with Deci to optimize machine learning on Intel processors. Deci says that in the coming weeks, it will work with Intel to deploy "innovative AI technologies" to the companies' mutual customers.
Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
Intel and Deci say the partnership will enable machine learning "at scale" on Intel chips, potentially opening up new inference applications through reductions in cost and latency. Deci has already worked to accelerate the inference speed of the well-known ResNet-50 neural network on Intel processors, cutting the model's latency by a factor of 11.8 and increasing throughput by up to 11 times.
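Deci has not published its benchmark harness, but latency and throughput figures like these are conventionally gathered by timing repeated forward passes over a fixed input. The sketch below, which assumes PyTorch and torchvision with a batch size of one, shows how such numbers are typically computed; the warm-up and run counts are illustrative choices, not Deci's settings:

```python
# Minimal sketch, assuming PyTorch/torchvision: timing repeated forward
# passes of ResNet-50 on CPU. Batch size, input shape, and run counts
# are illustrative assumptions, not Deci's benchmark configuration.
import time

import torch
from torchvision.models import resnet50

model = resnet50().eval()              # random weights; timing is unaffected
example = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

with torch.no_grad():
    for _ in range(10):                # warm-up runs, excluded from timing
        model(example)

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(example)
    elapsed = time.perf_counter() - start

print(f"latency: {elapsed / runs * 1e3:.1f} ms/image, "
      f"throughput: {runs / elapsed:.1f} images/s")
```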
“By optimizing the AI models that run on Intel’s hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs,” Deci CEO and cofounder Yonatan Geifman said. “We are delighted to collaborate with Intel to deliver even greater value to our mutual customers and look forward to a successful partnership.”
Deci achieves runtime acceleration through a combination of data preprocessing and loading, and the selection of model architectures, hyperparameters (i.e., the configuration values, set before training, that govern a model's structure and learning process), and datasets optimized for inference. It also takes care of steps like deployment, serving, monitoring, and explainability. Deci's accelerator redesigns models to create new models with several computation routes, all optimized for a given inference device.
Deci's router component ensures that each data input is directed via the proper route. (Each route is specialized for a prediction task.) The company's accelerator also works in synergy with other compression techniques like pruning and quantization, and can even act as a multiplier for complementary acceleration solutions such as AI compilers and specialized hardware, according to the company.
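Deci's exact architecture is proprietary, but the routing idea can be illustrated in a few lines of PyTorch. In the sketch below, the router module, the two routes, and their layer sizes are all illustrative assumptions, and dynamic quantization stands in for the complementary compression techniques mentioned above:

```python
# Toy illustration of a multi-route model with a router, combined with
# dynamic quantization as one complementary compression technique. The
# module names, routing rule, and layer sizes are assumptions made for
# illustration; this is not Deci's implementation.
import torch
import torch.nn as nn


class MultiRouteModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.router = nn.Linear(784, 2)  # scores the two routes per input
        self.routes = nn.ModuleList([
            nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10)),    # cheap route
            nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)),  # heavy route
        ])

    def forward(self, x):
        choice = self.router(x).argmax(dim=-1)  # pick one route per input
        # Simple per-sample dispatch; a production system would batch by route.
        return torch.stack([self.routes[int(i)](xi) for i, xi in zip(choice, x)])


model = MultiRouteModel().eval()

# Dynamic quantization stores Linear weights as int8, which can reduce
# CPU inference cost; it composes with the routing structure above.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized(torch.randn(4, 784)).shape)  # torch.Size([4, 10])
```

The appeal of such a design is that easy inputs can take a cheap route while hard ones take a heavier route, lowering average latency without sacrificing accuracy across the board.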
Deci was cofounded by Geifman, entrepreneur Jonathan Elial, and Ran El-Yaniv, a computer science professor at Technion in Haifa, Israel. Geifman and El-Yaniv met at Technion, where Geifman is a PhD candidate in the computer science department. To date, the Tel Aviv-based company, a participant in Intel's Ignite startup accelerator, has raised $9.1 million from investors including Square Peg.