
DevOps platform maker JFrog, the first company to develop a binary code management repository for developers, revealed Tuesday that it has acquired Tel Aviv-based Qwak to add AIOps and MLOps capabilities to its platform for building and securing AI and machine learning applications.
JFrog positions itself as the single system of record for all software packages (binaries), including ML models stored in Artifactory, which has been Qwak’s model registry of choice for several years.
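To make the “models are just binaries” idea concrete, here is a minimal sketch, assuming a hypothetical Artifactory instance, repository and access token, of deploying a serialized model through Artifactory’s standard deploy (PUT) REST endpoint. The host, repository key and path below are placeholders, not part of JFrog’s or Qwak’s announced tooling.

```python
# Minimal sketch, assuming a hypothetical Artifactory instance and repository:
# a trained model is treated like any other binary and deployed through
# Artifactory's standard deploy (PUT) REST endpoint.
import os
import requests

ARTIFACTORY_URL = "https://acme.jfrog.io/artifactory"  # placeholder instance
REPO_KEY = "ml-models-local"                            # placeholder repository
TARGET_PATH = "fraud-detector/1.4.0/model.onnx"         # placeholder layout

def deploy_model(local_file: str) -> None:
    """Upload a serialized model file as a binary artifact."""
    url = f"{ARTIFACTORY_URL}/{REPO_KEY}/{TARGET_PATH}"
    with open(local_file, "rb") as fh:
        resp = requests.put(
            url,
            data=fh,
            headers={"Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}"},
            timeout=60,
        )
    resp.raise_for_status()
    print(f"Deployed {local_file} to {url}")

if __name__ == "__main__":
    deploy_model("model.onnx")
```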
Qwak’s software enables machine learning models to be hosted, scanned and monitored by the Xray security tool in JFrog’s build platform. The acquisition also aims to improve DevSecOps by implementing a “reverse-security” process to protect models from malicious attacks.
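As a rough illustration of how a stored model might be gated by an Xray scan in a pipeline, the sketch below shells out to the JFrog CLI’s on-demand binary scan from Python. It assumes the jf CLI is installed and already configured against an Xray-enabled instance; the artifact name is a placeholder.

```python
# Rough sketch of a scan gate: shells out to the JFrog CLI's on-demand binary
# scan ("jf scan"), assuming the CLI is installed and configured against an
# Xray-enabled instance. The artifact name is a placeholder.
import subprocess
import sys

def scan_model(artifact: str) -> None:
    """Scan a local model binary with Xray and block promotion on findings."""
    result = subprocess.run(["jf", "scan", artifact])
    if result.returncode != 0:
        # Treat any non-zero exit code as a failed security quality gate.
        sys.exit(f"Xray scan flagged {artifact}; blocking promotion.")
    print(f"{artifact} passed the Xray scan gate.")

if __name__ == "__main__":
    scan_model("model.onnx")
```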
JFrog’s plan to centralize DevOps, DevSecOps and MLOps in one package is unique in the industry and will let users accelerate models from development to deployment, the company’s co-founder and CEO Shlomi Ben Haim told The New Stack.
“We are the first to combine DevOps, DevSecOps and MLOps together on one platform, and we are very honored and looking forward to serving our customers,” Ben Haim said. “That’s our primary target.”
The most significant result of combining JFrog with Qwak’s IP, he said, is the unification of machine learning model development and deployment processes within the broader software development life cycle.
Qwak’s management platform is designed specifically for machine learning models in production. It enables all relevant stakeholders to observe, analyze and manage their ML models, regardless of how they were developed, deployed or hosted. Qwak’s full-stack platform, Ben Haim said, frees data science teams from infrastructure concerns, enabling them to focus on building and deploying tailored AI and ML solutions without complex setup.
The new combination uses Qwak’s advanced model training and serving capabilities to manage the previously siloed and complex life cycle of models, alongside model storage management and security scanning of models provided by JFrog Xray.
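For readers less familiar with the model life cycle being described, the following is a generic, hypothetical illustration of the build-then-serve pattern such platforms manage. The class and method names are invented for this sketch and are not Qwak’s actual SDK.

```python
# Generic illustration (not Qwak's actual SDK) of the build-then-serve model
# life cycle: train once, serialize the result as a binary, and expose a
# predict entry point. All names here are hypothetical.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

class ManagedModel:
    """Hypothetical stand-in for a platform-managed model class."""

    def build(self) -> None:
        # Training step: produces the model object that later gets stored
        # as a binary artifact (e.g., in a registry) and security-scanned.
        X, y = load_iris(return_X_y=True)
        self._model = LogisticRegression(max_iter=1000).fit(X, y)

    def save(self, path: str) -> None:
        # The serialized file is "just another binary" to the platform.
        with open(path, "wb") as fh:
            pickle.dump(self._model, fh)

    def predict(self, features):
        # Serving step: called per request once the model is deployed.
        return self._model.predict(features)

if __name__ == "__main__":
    m = ManagedModel()
    m.build()
    m.save("iris-classifier.pkl")
    print(m.predict([[5.1, 3.5, 1.4, 0.2]]))
```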
Defining ‘Reverse-Security’ Processes
A “reverse-security process” refers to strategies that focus on understanding and mitigating threats by analyzing attacks and vulnerabilities the way hackers see them, Ben Haim said.
“The attacker cannot really penetrate your organization if you don’t leave the door open,” he said. “If you leave the door open, it can only be the runtime environment in the production environment.
“And what do you have in the production environment? What do you have in the runtime environment? You don’t have your source code; you have the binaries; you have the software packages.”
What’s been built in JFrog, he added, “is actually a ‘reverse-security’ process, thinking with the hacker in mind. Such as: What do you want to see and how do you want to protect it, all the way to the developer? How do you build the traceability, and how do you build the protection throughout the entire software supply chain on all the quality gates?”
By that logic, large language models (LLMs), “just like any other package, just like any other binary, will find themselves in the production environment. And if you brought it in with malicious door openers, then the hacker will wait for it. And this is where the JFrog platform is today. It’s got thousands of customers.”
Misuse of LLMs Becoming a Major Issue
While LLMs themselves aren’t being infected, their misuse for malicious purposes poses a significant threat to cybersecurity. There is a concerning trend of malicious actors leveraging the capabilities of LLMs to enhance and accelerate cyberattacks.
For example:
- LLMs are being used to generate malicious code, including malware variations and polymorphic code that can evade detection by traditional security tools. They can also help obfuscate existing malware, making it harder to analyze and reverse-engineer.
- LLMs can craft sophisticated phishing emails, social engineering messages, and even deepfake audio or video content to deceive victims into divulging sensitive information or downloading malware.
- Malicious actors can use LLMs to generate large volumes of misleading or false information, sowing discord, manipulating public opinion, and undermining trust in institutions.
Given this trend, is JFrog using the old concept of honeypots to lure hackers toward decoy LLMs and catch them before they do their dirty business?
“We are doing it as part of the security research team, and part of what we are trying to find out is whether hackers will bite, but they are very smart,” Ben Haim said. “They are not biting that early. So you put it out and you wait and you wait and you wait, and once it happens, you save the entire community. JFrog, together with Docker, helped publish an article earlier this year saying that we found over 4 million containers on Docker Hub with malicious content.
“But this is the story of MLOps expanding the JFrog platform. This is our breakfast and dinner. This is what we do for a living. We deal with binaries, and models are yet another binary for us.”
The integration of the two platforms is ongoing and expected to be completed this calendar year. The acquisition follows a JFrog-Qwak product integration announced earlier this year, based on JFrog’s “model as a package” approach: a holistic solution aimed at eliminating the need for separate tools and compliance efforts, offering full traceability in a single solution, Ben Haim said.
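One way to picture “model as a package” with traceability is the hedged sketch below: it uploads a model binary as part of a named, versioned build via the JFrog CLI, publishes the build-info, and then asks Xray to scan the build as a quality gate. It assumes the jf CLI is installed and configured; the repository, build name and file names are placeholders.

```python
# Hedged sketch: publishing a model as part of a versioned, traceable build
# using the JFrog CLI from Python. Assumes "jf" is installed and configured;
# the repository, build name and file names are placeholders.
import subprocess

BUILD_NAME = "fraud-detector-model"   # hypothetical build name
BUILD_NUMBER = "42"                   # hypothetical build number

def run(*cmd: str) -> None:
    """Run a CLI step and stop immediately if it fails."""
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Upload the model binary and associate it with a named build.
    run("jf", "rt", "upload", "model.onnx", "ml-models-local/fraud-detector/1.4.0/",
        f"--build-name={BUILD_NAME}", f"--build-number={BUILD_NUMBER}")
    # Publish the build-info so the artifact is traceable back to this run.
    run("jf", "rt", "build-publish", BUILD_NAME, BUILD_NUMBER)
    # Ask Xray to scan the published build as a security quality gate.
    run("jf", "build-scan", BUILD_NAME, BUILD_NUMBER)
```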
JFrog's combined solution will now enable building, deploying, managing and monitoring everything from AI workflows to classic ML models, all on a unified platform.