
Inference Is Table Stakes. That’s a Good Thing for Ampere

Closeup photo of poker chips: because inference is table stakes for organizations using AI, the semiconductor company Ampere is focused on creating chips that work with the cloud native tech used to build AI apps and that perform well at scale.

PARIS — Ampere, the maker of CPUs based on the Arm architecture, is making its presence known, using inference as a big hook.

AI training is a batch-job workload, but inference is crucial to AI-focused application development. Every application that uses AI eventually depends on inference, and on keeping its models fine-tuned and updated.

The hooks for Ampere? Cloud native compatibility is one, along with raw performance and the company's answer to the noisy neighbor problem that can come with virtual machines.

Ampere is a semiconductor design company founded by former Intel executives and led by CEO Renée James. It makes chips for cloud services and for companies building out their own infrastructure. Its customers include all the major cloud providers except Amazon Web Services, which has a similar in-house technology, Graviton.

The story for Ampere centers on open source and the ability to run any workload on its architecture without the hassle of using NVIDIA GPUs and integrating its CUDA library, the software layer needed to connect applications to those GPUs.

“So the focus is on enabling this entire open source ecosystem,” Victor Jakubiuk, Ampere’s vice president of AI, told The New Stack at KubeCon + CloudNativeCon Europe in Paris.

Inference Matters at Scale

Open source frameworks like PyTorch and TensorFlow run inference efficiently on CPUs alone, Jakubiuk said. Ampere optimizes specifically for inference, ensuring the code generated at runtime for those AI models is tuned for its CPUs and can scale across multiple servers simultaneously.
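
For a concrete picture of what that looks like, here is a minimal CPU-only PyTorch inference sketch. It is illustrative rather than anything Ampere ships; the model and the thread-count policy are stand-ins.

    import os
    import torch

    # A small stand-in model; a real deployment would load a trained checkpoint.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    )
    model.eval()

    # Match PyTorch's intra-op thread pool to the available cores (one possible policy).
    torch.set_num_threads(os.cpu_count() or 1)

    # inference_mode() skips autograd bookkeeping, which is only needed for training.
    with torch.inference_mode():
        batch = torch.randn(32, 128)
        predictions = model(batch).argmax(dim=1)
    print(predictions.shape)  # torch.Size([32])

The same script runs unchanged on x86 or Arm CPUs; no GPU-specific code such as CUDA calls is involved.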

The efficiency of inference matters at scale, Jakubiuk said.

“If, for AI training, you trained a model once, it might be expensive,” he said. “But once you have that model, once you go into deployment, you’re essentially multiplying this by 10x, 100x, 1,000x, because you’re deploying this at scale. And the moment you multiply this by a factor of 1,000, any sort of inefficiency you might have multiplies 1,000 times. And at the same time, any efficiency gain multiplies significantly.”

By combining software and hardware optimization, a customer can get much better performance per watt of power delivered to the data center and, therefore, much better total cost of ownership (TCO) for end users in the cloud.
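
To make the performance-per-watt argument concrete, here is a back-of-the-envelope sketch; every number in it is invented purely for illustration and does not come from Ampere.

    # Illustrative only: all figures are made up to show the arithmetic.
    ELECTRICITY_COST_PER_KWH = 0.12   # dollars
    HOURS_PER_MONTH = 730

    def monthly_energy_cost(watts: float) -> float:
        # Electricity cost per month for a server running flat out.
        return watts / 1000 * HOURS_PER_MONTH * ELECTRICITY_COST_PER_KWH

    def cost_per_million_inferences(inferences_per_sec: float, watts: float) -> float:
        monthly_inferences = inferences_per_sec * 3600 * HOURS_PER_MONTH
        return monthly_energy_cost(watts) / monthly_inferences * 1_000_000

    # Two hypothetical servers with identical throughput but different power draw:
    print(cost_per_million_inferences(inferences_per_sec=2000, watts=400))  # higher energy cost
    print(cost_per_million_inferences(inferences_per_sec=2000, watts=250))  # lower energy cost

Multiply that per-server gap by the hundreds or thousands of servers Jakubiuk describes, and the power-efficiency difference becomes the TCO difference.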

There are three core use cases, Jakubiuk said:

  • Computer vision workloads: Anything that processes videos and images.
  • Recommendation engines: For example, e-commerce product recommendations.
  • Large language models (LLMs): Processing text to generate new text or to understand it. Ampere has seen particular interest in open source models such as Mistral and Llama.

First, the raw performance of the CPUs, with their large core counts, makes them suitable for use with LLMs. The next factor is TCO, measured as performance per watt, which Jakubiuk said gives Ampere’s CPUs an advantage over GPUs. That makes a difference if you run your organization’s data center: power is a constraint almost everywhere, and maximizing performance per watt becomes critical as data centers demand ever more power.

Ampere CPUs scale to 128 cores and beyond, Jakubiuk said. They can run any workload without noisy neighbor problems, avoiding the performance throttling that x86 CPUs can suffer when running virtual machines: one virtual machine may become compute-intensive while a second runs a database or another heavy workload, dragging down x86 CPU performance because of heat and power constraints. Ampere redesigned its CPU to avoid noisy neighbor problems.
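
Ampere’s answer is in the silicon, but for context, here is one way operators mitigate noisy neighbors in software today: pinning an inference process to dedicated cores. The sketch is generic and Linux-specific, not an Ampere recommendation.

    import os
    import torch

    # Pin the current process (pid 0 = self) to a dedicated set of cores so a
    # compute-hungry neighbor on the same host cannot steal its CPU time.
    # The core IDs are illustrative; this is a Linux-only API.
    dedicated_cores = {0, 1, 2, 3}
    os.sched_setaffinity(0, dedicated_cores)

    # Keep PyTorch's inference threads inside the same core budget so they do
    # not spill onto cores owned by other tenants.
    torch.set_num_threads(len(dedicated_cores))

    print("running on cores:", os.sched_getaffinity(0))

In Kubernetes, the kubelet’s static CPU manager policy offers the same kind of exclusive-core guarantee at the pod level.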

Ampere provides out-of-the-box inference, according to Jakubiuk. Models trained on GPUs run on Ampere CPUs, and the company recommends using TensorFlow or PyTorch. Three core AI frameworks run on Ampere’s CPUs: TensorFlow, PyTorch and ONNX. The focus is on enabling support for the open source community, including models from sources like Hugging Face and models built in Jupyter notebooks on VMs.
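
As an illustration of that GPU-trained, CPU-served handoff, the sketch below exports a small PyTorch model to ONNX and serves it on CPU with ONNX Runtime; the model, file name and input shapes are placeholders, not anything from Ampere’s documentation.

    import numpy as np
    import torch
    import onnxruntime as ort

    # A stand-in network; in practice this would be a checkpoint trained on GPUs.
    model = torch.nn.Sequential(
        torch.nn.Linear(16, 8),
        torch.nn.ReLU(),
        torch.nn.Linear(8, 2),
    )
    model.eval()

    # Export to the framework-neutral ONNX format.
    example_input = torch.randn(1, 16)
    torch.onnx.export(model, example_input, "model.onnx",
                      input_names=["input"], output_names=["logits"])

    # Serve the exported model on CPU cores only.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    logits = session.run(None, {"input": np.random.rand(1, 16).astype(np.float32)})[0]
    print(logits.shape)  # (1, 2)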

“Probably the two that are worth mentioning are Llama and Mistral because these are by far the most popular models,” Jakubiuk said. “They run with very good performance, and especially performance per watt. And as I said, for Llama, you can get up to 80% better performance per dollar spent versus running them on GPUs.”
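
For a sense of what running such a model CPU-only looks like, here is a minimal Hugging Face transformers sketch; the checkpoint name is only an example, and throughput will depend heavily on the model size and the machine.

    from transformers import pipeline

    # device=-1 keeps the pipeline on CPU; substitute any causal LM checkpoint
    # you have access to, such as a Llama or Mistral variant.
    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example checkpoint
        device=-1,
    )

    result = generator("Why does inference efficiency matter at scale?",
                       max_new_tokens=64)
    print(result[0]["generated_text"])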

Janakiram MSV, a longtime analyst and frequent contributor to The New Stack, said inference will become as essential to application development as APIs are today. Agents will emerge from inference and will be developed with retrieval-augmented generation (RAG).

And that, he said, will lead to a new wave of agents for the cloud native community, coming into focus this year and into 2025.

“Every observability company will have their own agent that can find anomalies, perform root-cause analysis and use this data to implement RAG,” Janakiram said.

The talk now is all about agents. They will emerge in consumer technologies such as Google's generative search and across the enterprise. That fits Ampere, as inference becomes table stakes for companies large and small.

