There’s nothing like having a $3 trillion company (Nvidia) as a close and valued partner. Think of having the biggest kid on your school football team as a best friend.
At JFrog’s annual DevOps and security conference, swampUP 2024, the Sunnyvale, Calif.-based company revealed new integrations for Nvidia Inference Microservices (NIMs), an easy-to-use set of microservices designed to speed the deployment of generative AI models across clouds, data centers and workstations.
The intersection of the JFrog software supply chain security suite and NIMs combines Nvidia’s GPU infrastructure with centralized DevSecOps processes in an end-to-end workflow. This allows developers and data scientists to bring machine learning (ML) models from JFrog Artifactory into production quickly, with full transparency and traceability, JFrog CEO and co-founder Shlomi Ben Haim told The New Stack.
“This is a big vote of confidence for JFrog, to be chosen by Nvidia as the secure partner to host their enterprise AI models,” Ben Haim said. “JFrog will provide the capability to natively host, curate and scan Nvidia’s AI models within the Artifactory repository. This partnership aligns with JFrog’s mission to support various types of binaries, including AI models, as part of the software supply chain.”
The integration is expected to produce multiple benefits, according to Ben Haim:
- Unified management: Centralized access control and management of NIM images and models alongside all other assets, including proprietary artifacts, in JFrog Artifactory to enable integration with existing workflows.
- Security and integrity: Continuous scanning at every stage of development, including containers, models and dependencies, delivering contextual insights across NIM models with auditing and usage statistics that drive compliance.
- Model performance and flexibility: Optimized AI application performance through prebuilt accelerated engines, offering low latency and flexible deployment via Artifactory, including self-hosted, multicloud and air-gapped options.
The integration works by having JFrog Artifactory proxy Nvidia NGC, giving teams a single user experience coupled with high performance. Nvidia NGC is a hub for GPU-optimized software for deep learning, machine learning and high-performance computing (HPC). It provides a full-scale catalog of GPU-accelerated containers, pre-trained models and SDKs that are designed to simplify and accelerate AI workflows, Ben Haim said.
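This kind of proxying is typically set up in Artifactory as a remote repository that points at Nvidia’s NGC container registry (nvcr.io), so images pulled through it are cached locally and flow through the same access controls and scanning as any other artifact. As a rough sketch only (the repository key and description below are hypothetical, not details from the announcement), a remote Docker repository definition in Artifactory’s repository configuration JSON might look like:

```json
{
  "key": "nvcr-remote",
  "rclass": "remote",
  "packageType": "docker",
  "url": "https://nvcr.io",
  "description": "Hypothetical remote repository proxying Nvidia NGC through Artifactory"
}
```

A definition like this could be applied through Artifactory’s repository REST API, after which developers would pull NIM images via the Artifactory hostname and repository key (for example, `docker pull <artifactory-host>/nvcr-remote/<image>`) rather than directly from NGC, keeping model and container consumption inside the organization’s existing DevSecOps workflow.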
“As AI increasingly becomes a crucial strategic asset for organizations, it’s vital to empower developers and data scientists to focus on responsible innovation rather than wrestling with infrastructure and tool challenges,” Ben Haim said.
The post JFrog, Nvidia Hook up to Secure AI Systems at Scale appeared first on The New Stack.