Concept: New York-based MLOps platform provider Spell has announced the launch of an operations platform that includes the tools needed to train AI models using deep learning algorithms. The platforms currently used to train AI models are tuned for ML algorithms; deep learning algorithms require their own deep learning operations (DLOps) platform, according to Spell. The Spell platform uses tools developed by the company to automate the entire deep learning workflow, helping enterprises build and train AI models for computer vision and voice recognition applications that rely on deep learning algorithms.
Nature of Disruption: The Spell platform is designed to handle the management, automation, orchestration, documentation, optimization, deployment, and monitoring of deep learning models across their full lifecycle. Training an AI model based on deep learning algorithms can require hundreds of experiments with thousands of parameters running across large numbers of graphics processing units (GPUs), all of which must be tracked and managed. While most existing MLOps platforms are not designed to handle deep learning algorithms, the Spell platform can also be used to manage AI models that use ML algorithms. Although Spell does not provide tools for controlling the lifecycle of such models, data science teams can use the platform to integrate their own third-party frameworks. Whenever possible, the Spell platform also saves money by running workloads on spot instances, which cloud service providers make available at a discount for limited periods. According to the startup, this capability can cut the total cost of training an AI model by as much as 66%. That matters because developing AI models based on deep learning algorithms can, in some cases, cost millions of dollars.
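The arithmetic behind that claim is straightforward. The sketch below is a generic back-of-the-envelope calculation, not Spell's pricing model: the hourly rates and interruption overhead are assumptions chosen only to show how discounted spot capacity can cut a training bill by roughly two-thirds.

```python
# Back-of-the-envelope comparison of on-demand vs. spot GPU pricing.
# The rates and overhead below are illustrative assumptions, not
# Spell's or any cloud provider's actual figures.

ON_DEMAND_RATE = 3.00         # assumed $/GPU-hour, on-demand instance
SPOT_RATE = 0.90              # assumed $/GPU-hour, same instance on the spot market
INTERRUPTION_OVERHEAD = 0.10  # assumed 10% extra runtime from preemptions and checkpoint restarts

def training_cost(gpu_hours: float, rate: float, overhead: float = 0.0) -> float:
    """Total cost of a training run, padding runtime for interruptions."""
    return gpu_hours * (1.0 + overhead) * rate

gpu_hours = 10_000  # e.g. 125 GPUs running for 80 hours
on_demand = training_cost(gpu_hours, ON_DEMAND_RATE)
spot = training_cost(gpu_hours, SPOT_RATE, INTERRUPTION_OVERHEAD)

print(f"on-demand: ${on_demand:,.0f}")
print(f"spot:      ${spot:,.0f}")
print(f"savings:   {1 - spot / on_demand:.0%}")  # roughly 67% with these assumed rates
```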
Outlook: Most AI applications will eventually be built using a combination of machine learning and deep learning techniques. Indeed, as the creation of AI models using ML algorithms becomes more automated, many data science teams will devote more effort to developing increasingly complex AI models that rely on deep learning algorithms. As GPUs deployed in on-premises IT systems or accessed via cloud services become more affordable, the cost of creating AI models based on deep learning algorithms should continue to fall. In the meantime, while AI model development processes may converge, it is doubtful that the typical DevOps-based approaches used to manage application development will simply be extended to cover AI models. The startup argues that the more linear methods used today to build and deploy traditional apps do not lend themselves to the continual retraining of AI models that are susceptible to drift. Regardless, every AI model being developed must eventually make its way into a production-ready application. Many companies are currently grappling with the challenge of matching the rate at which AI models are produced to the faster rate at which applications are now deployed and upgraded. It will only be a matter of time before every app, to varying degrees, includes one or more AI models. The challenge now is to reduce the friction involved when an AI model has to be deployed into an application.
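To see why a linear build-and-release pipeline struggles here, consider the retraining loop that a drifting model implies. The sketch below is a minimal, generic illustration of drift-triggered retraining; the metric, threshold, and function names are assumptions made for illustration and do not describe Spell's platform.

```python
# Minimal sketch of a drift-triggered retraining loop. Generic illustration
# only; thresholds, metric choice, and names are assumptions.
import numpy as np

DRIFT_THRESHOLD = 0.15  # assumed tolerance on feature-distribution shift

def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Rough PSI between the training distribution and live traffic."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed) + 1e-6
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

def monitor_and_retrain(train_sample: np.ndarray, live_sample: np.ndarray, retrain_job) -> bool:
    """Kick off a retraining job when live data has drifted from the training data."""
    psi = population_stability_index(train_sample, live_sample)
    if psi > DRIFT_THRESHOLD:
        retrain_job()  # e.g. resubmit the training pipeline with fresh data
        return True
    return False
```

Unlike a one-way DevOps release, this loop never terminates: monitoring feeds back into training, which is the continual-retraining pattern the article says linear pipelines handle poorly.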