Channel: Artificial Intelligence News, Analysis and Resources - The New Stack

Apache Ray Finds a Home on the Google Kubernetes Engine


LAS VEGAS — In its ongoing effort to make Kubernetes the de facto platform for large-scale AI/machine learning (ML) workloads, Google has struck up a partnership with Anyscale to offer a hosted version of one of the fastest-growing AI platforms, Ray.

Ray is a framework for scaling Python applications across distributed environments. It has found favor with AI system designers and data scientists for running large-scale AI inference jobs, in particular because of its native GPU support.

An optimized version of Ray called RayTurbo will be available on Google Kubernetes Engine (GKE). The partnership was unveiled at the Google Cloud Next conference, being held this week in Las Vegas.

The partners expect RayTurbo on GKE to process data 4.5 times faster, and to require 50% fewer nodes, than open source Ray.

The deeper integration between the two services will bring other benefits to the AI community as well, according to Gabe Monroy, vice president and general manager for cloud runtimes at Google, in a TNS interview. Ray users will enjoy faster startup times, high-performance storage for model weights, access to TPUs, and strong scalability, he said.

The partnership should also make it easier for existing Ray users to provision the necessary hardware to run their workloads, Monroy said.

“What has changed over the last couple years is that data processing is becoming more inference-heavy. This is where Ray really shines,” said Robert Nishihara, co-founder at Anyscale and one of the creators of Ray, in a TNS interview.

Ray as the Distributed OS for AI

In fact, the partners aim to establish the combination of Ray and Kubernetes as the de facto distributed operating system for AI workloads.

Engineers at both companies have been working to optimize Ray for Kubernetes, blending how Kubernetes scales at the cluster level with how Ray scales at the task level, in an open source project called KubeRay.

The software can support most AI and ML workloads — including model training and serving, batch inference, model learning, generative AI and LLM inference, and fine-tuning.

[Table: Ray explanation]

Created in 2017, Ray can run Python in parallel, and understands most data types and model architectures. It can also run on GPUs and most other hardware accelerators out of the box, a distinct advantage in the AI space.

The software “makes it very easy to kind of build your AI models, train them and serve them as the unified framework,” said Anyscale CEO Keerti Melkote.

Anyscale has seen a 300% increase in customer compute hours on RayTurbo in just the past three months.

One recent Ray convert is Amazon, which chose the platform over Apache Spark, a more general-purpose data processing framework, for large-scale table compaction, finding Ray 82% more efficient than Spark. The online giant also found Ray’s Pythonic interface easier for its data scientists to use than Spark’s SQL-oriented approach.

Likewise, OpenAI uses Ray as the underlying infrastructure for the massive scale of computation and data processing required to train and run ChatGPT.

Other large-scale users of Ray include ByteDance, Uber, and Pinterest.

RayTurbo on GKE will be available later this year on the Google Cloud Marketplace.

The post Apache Ray Finds a Home on the Google Kubernetes Engine appeared first on The New Stack.

