
From VMs to AI: How Edge Computing Is Evolving


To quote Aerosmith: We’re living on the edge. Enterprises and organizations around the world are now comfortable with the concept of edge computing. Whether it’s tracking events on a factory floor or running embedded in the retail stores where we shop, edge computing now covers the Earth.

The expansion of edge computing implies a few areas of growth for businesses using this technology. First, it requires sensors or Internet of Things (IoT) devices. By extension, this implies the second item: more data to process. The whole purpose of most edge computing initiatives is to gather, monitor and analyze data quickly and locally. Factory components reveal their age through their efficiency (or failure rate), and customers can get specialized offers or new experiences that would be impossible without localized resources.

This all raises the question: What now? Now that the sensors are all in place and the data is flowing, what’s next for edge? The answer is all over the headlines, but rather than being a buzzword add-on, the edge is the perfect place to run AI applications. After all, AI is great at analyzing data, and the edge certainly produces lots of data to analyze.

So how do AI and edge come together? The answer actually takes a detour through some technology with which you may already be quite familiar: virtual machines (VMs).

Challenges at the Edge

What makes edge computing unique? Because it operates outside the confines of a data center, it introduces new variables on top of all the everyday challenges. Some of these include:

  • Non-standard servers: Sometimes there isn’t room for a half rack (or even 4U node).
  • Unreliable network connectivity: Not every location has consistent, fast networking.
  • Inconsistent power: Even electricity isn’t guaranteed to be on 24/7.
  • Nonexistent HVAC: Some places are hot, cold, wet or dusty (not great for hardware).
  • Mountainous data: Devices and sensors can generate gigabytes of data on site.

These are just a few examples of what you can expect at the edge. Ironing out these problems is, of course, the basis for keeping that data flowing; a store-and-forward pattern like the sketch below is one common way to ride out gaps in connectivity and power.
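
As one illustration of riding out unreliable connectivity and power, here is a minimal store-and-forward sketch in Python. It writes every reading to local storage first and only deletes what has been confirmed uploaded. The sensor source, the upload call and the batch size are hypothetical stand-ins, not any particular product’s API:

```python
import sqlite3
import time

def read_sensor():
    """Hypothetical sensor read; a real device driver goes here."""
    return (time.time(), 42.0)

def upload(rows):
    """Hypothetical uplink; replace with a real HTTP/MQTT call.
    Returns True only when the batch is safely delivered."""
    return False

db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS buffer (ts REAL, value REAL)")

while True:
    # Persist locally first, so a power or network outage
    # never drops a reading.
    db.execute("INSERT INTO buffer VALUES (?, ?)", read_sensor())
    db.commit()

    # Forward a small batch when the link is up, then delete
    # only what was acknowledged.
    rows = db.execute("SELECT ts, value FROM buffer LIMIT 100").fetchall()
    if rows and upload(rows):
        db.execute("DELETE FROM buffer WHERE ts <= ?", (rows[-1][0],))
        db.commit()
    time.sleep(1.0)
```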

Challenges at Home Today

Technology is always changing, but one constant for IT teams remains: balancing the exciting new things that will drive revenue in the future while still maintaining all the old things that make money today.

Applications come in many shapes and sizes: big and small, virtualized, containerized or bare metal, and now near or far. It truly is a mixed bag across many organizations; after all, who has the resources to rewrite every application? Even with all the benefits of containers, VMs are still extremely popular and will remain so for years to come.

A hybrid solution brings all the benefits of modern containerized infrastructure to bear on the virtual machines out in the field. For mixed environments running varying types of applications, it lets VMs and containers run side by side, using the same tools and processes, so teams can manage all their apps, including bare metal, together on a single application platform.
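
For a concrete (if simplified) look at what “side by side” means, the sketch below uses the Kubernetes Python client to list ordinary pods and virtual machines from the same cluster. It assumes a cluster running OpenShift Virtualization (built on KubeVirt), which provides the kubevirt.io VirtualMachine custom resource; the namespace name is invented:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig is available
ns = "edge-site-1"         # hypothetical namespace for one edge location

# Containers: ordinary pods via the core API.
for pod in client.CoreV1Api().list_namespaced_pod(ns).items:
    print("pod:", pod.metadata.name)

# VMs: the same API machinery, just a custom resource
# provided by KubeVirt / OpenShift Virtualization.
vms = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace=ns, plural="virtualmachines",
)
for vm in vms["items"]:
    print("vm:", vm["metadata"]["name"])
```

One set of credentials, one API, two kinds of workloads: that is the practical payoff of the hybrid platform.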

Challenges at Home Tomorrow

It’s hard to find a recent tech blog that doesn’t mention AI, and this one is no exception. Now that all of our existing applications have a home, let’s look ahead to some new AI use cases that can deliver new insights or decisions. Whether it’s quality control in a factory, special offers in retail or onboard operations in transportation, intelligent insights that are fast, local and private can bring real value to existing industries.

No matter your thoughts on AI, there are fundamental, undeniable things needed to build out an AI workflow, a development process and, ultimately, an application of the AI. Those requirements aren’t even terribly shocking: If you’ve seen one AI workflow graph, you’ve pretty much seen them all. The only differences tend to be the actual software implementations. The workflow is generally the same, and not unlike a traditional software development workflow, if you’re a little generous with your definitions:

  1. Data exists.
  2. A bit of software analyzes that data and produces some form of output based upon it.
  3. The validity of the output is assessed.
  4. The software is updated, and the data is updated.
  5. Go to step 1.
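
To make the loop concrete, here is a toy Python version of those five steps. The data, the one-parameter “model” (a threshold on a sensor reading) and the update rule are all invented for illustration:

```python
# Step 1: data exists — (reading, label) pairs, made up here.
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

def accuracy(threshold):
    # Steps 2-3: the "software" produces an output for each
    # reading, and its validity is assessed against the label.
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Step 4: update the software by picking the best threshold;
# step 5 is rerunning all of this as new labeled data arrives.
threshold = max((i / 10 for i in range(1, 10)), key=accuracy)
print(threshold, accuracy(threshold))  # 0.4 1.0
```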

Software development is quite similar in that it is also a feedback loop. The more software that’s written, the more data is gathered, the more the output can be assessed and the better the software becomes. It’s a self-perpetuating cycle, but without a steady flow of data and software innovation, the process can become stagnant.

Thus, it is incumbent upon practitioners to keep that data flowing into the cluster. That’s why Kubernetes is the heart of the AI revolution: It allows all that infrastructure to exist in the same mental space for everyone involved.

Adding virtual machines to edge locations only enhances the power of the platform. With edge systems ingesting data on site, sometimes through apps that are older than some of the developers, virtual machines cover the legacy workloads in those distributed apps that aren’t necessarily cloud native. Red Hat OpenShift lets organizations run both those new AI applications and legacy VMs in one environment, with one set of tools in one place.


