OctoML makes it easier to put AI/ML models into production


OctoML, the well-funded machine learning startup that helps enterprises optimize and deploy their models, launched a major update to its product today that will make it far easier for developers to integrate ML models into their applications. With this release, OctoML can transform ML models into portable software functions that developers interact with through a consistent API, which also makes it easier to fold these models into existing DevOps workflows.
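To make the idea concrete, here is a minimal sketch of what "model as a function" means in practice: the application only ever sees a plain callable, while the serving details sit behind it. The class and names below are illustrative, not OctoML's actual API.

```python
import numpy as np

# Hypothetical sketch of the "model as a function" idea: deployment details
# (hardware, runtime, serving stack) hide behind one callable with a stable
# signature. None of these names come from OctoML's real API.
class ModelFunction:
    """Wraps any inference backend behind a uniform __call__ interface."""

    def __init__(self, backend):
        # The backend could be a local CPU runtime, a GPU-accelerated
        # runtime, or a remote endpoint; the caller never needs to know.
        self._backend = backend

    def __call__(self, inputs: np.ndarray) -> np.ndarray:
        return self._backend(inputs)

# Stand-in backend; in practice this would be an optimized, packaged model.
predict = ModelFunction(lambda x: x.mean(axis=-1))
print(predict(np.ones((2, 4))))  # application code sees only a function call
```

Because the callable's signature never changes, swapping the backend (say, moving from CPU to GPU serving) requires no change to application code.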

As OctoML founder and CEO Luis Ceze told me, he believes that this is a major moment for the company. Ceze, together with the company’s CTO Tianqi Chen, CPO Jason Knight, Chief Architect Jared Roesch and VP of Technology Partnerships Thierry Moreau, founded the company to productize TVM, an open-source machine learning compiler framework that helps ML engineers optimize their models for specific hardware.


“When we started OctoML, we said: let’s make TVM as a Service,” Ceze said. “We learned a lot from that but then it became clear as we worked with more customers that AI/ML deployment is still too hard.”

He noted that as the tools to ingest data and build models have improved over the last few years, the gap between what those models can do and actually integrating them into applications has only grown. By essentially turning models into functions, that gap mostly disappears. The new system abstracts much of that complexity away from developers, which should help bring more models into production; today, depending on whose numbers you trust, more than half of trained ML models never make it into production.

Since OctoML already offers tools to make those models run essentially anywhere, many of the choices about where to deploy a model can now also be automated. “What sets us apart from any other solution is the ability to get the model for deployment, integrate it into the application — and then run on any endpoint,” Ceze said. He noted that this is a game-changer for autoscaling, too, since it allows engineers to build autoscaling systems that move a model between CPUs and accelerators with different performance characteristics as needed.
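As a rough illustration of that autoscaling idea, a scheduler that can run the same packaged model on any endpoint might simply route to the cheapest hardware tier that can absorb the current load. The endpoint names, throughput figures, and costs below are invented for illustration; OctoML has not published this logic.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    max_qps: float        # sustainable queries per second
    cost_per_hour: float  # hypothetical hourly price

# Invented hardware tiers; real capacities and prices would be measured.
ENDPOINTS = [
    Endpoint("cpu-small", max_qps=50, cost_per_hour=0.10),
    Endpoint("cpu-large", max_qps=200, cost_per_hour=0.40),
    Endpoint("gpu-accel", max_qps=2000, cost_per_hour=1.50),
]

def pick_endpoint(current_qps: float) -> Endpoint:
    """Choose the cheapest endpoint that can absorb the observed load."""
    viable = [e for e in ENDPOINTS if e.max_qps >= current_qps]
    return min(viable, key=lambda e: e.cost_per_hour)

print(pick_endpoint(30).name)   # quiet traffic -> cpu-small
print(pick_endpoint(500).name)  # a spike -> gpu-accel
```

The point is not the toy policy but the portability beneath it: because the packaged model runs on any of these endpoints unchanged, the scaler is free to make purely economic decisions.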

The models-as-functions capability is only one part of the company’s announcements today, though. The platform also gains a tool that lets OctoML use machine learning to optimize machine learning models: the service can automatically detect and resolve dependencies, and clean and optimize model code. There is also a new local OctoML command-line interface, as well as support for Nvidia’s Triton inference server, which can now be used with the new model-as-function service.

“NVIDIA Triton is a powerful abstraction which allows users to leverage multiple deep learning frameworks and acceleration technologies across both CPU and NVIDIA GPU,” said OctoML Chief Architect Jared Roesch. “Furthermore, by combining NVIDIA Triton with OctoML we are enabling users to more easily choose, integrate, and deploy Triton-powered functionality. The OctoML workflow further increases the user value of Triton-based deployments by seamlessly integrating OctoML acceleration technology, allowing you to get the most out of both the serving and model layers.”
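Triton itself ships a public Python client, so calling a Triton-served model looks roughly like the snippet below. The server URL, model name, and tensor names ("INPUT__0"/"OUTPUT__0") are placeholders that depend on how the model was packaged.

```python
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

# Connect to a running Triton server; the address is a placeholder.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Declare the input tensor (name, shape, dtype are model-specific) and fill it.
inp = httpclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))

# Run inference and read the result back as a NumPy array.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT__0").shape)
```

In OctoML's framing, the model-as-function layer would sit in front of a deployment like this one, so application code would not need to speak the Triton client protocol directly.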

Looking ahead, Ceze noted that the company, which has grown from 20 employees to over 140 since 2020, will focus on bringing its service to more edge devices, including smartphones and, thanks to its partnership with Qualcomm, other Snapdragon-powered devices.

“The timing seems right because as we talk to folks that are deploying to the cloud, now they all say they have plans to deploy on the edge, too,” he said.
