
Open Source AI Is ‘Dangerous’: Euro Cybersec Chief Warns


HELSINKI — Open source AI is “dangerous,” and even if a way could be found to hardwire in safeguards, open source dogmatists would not accept it, Finnish cybersecurity pioneer Mikko Hypponen warned this week.

Speaking at WithSecure’s recent Sphere conference here, the firm’s chief research officer said the tech world and society at large tended to overestimate the speed and underestimate the size of tech revolutions, and that was the case with AI.

Deep Fake Fears

He said that current fears about the use of deep fake AI were overblown, at least for now, though AI-based attacks were happening.

“We’ve seen the first examples of malware using large language models (LLMs) and of course, we’re seeing many of these deep fakes,” Hypponen told The New Stack. However, these typically feature deep fakes of celebrities in cryptocurrency scams.

There was evidence of “audio deep fakes,” he said, but these were pre-rendered. “Even those we haven’t seen in real time.”

When it came to targeted one-off scams using real-time deep fakes of real people, for instance, the technology was available. “But we have no evidence of that happening. It’s going to happen, but we haven’t seen it yet,” Hypponen noted.

But he also raised the possibility of “deep scams,” where criminals use AI to carry out, automate and run scams at scale. A traditional “romance scammer” can only juggle so many scams at once. However, AI potentially allowed criminals to automate the process and scale up.

Hypponen said that he had suggested to OpenAI in a meeting that it should simply ban romantic discussion to head off the industrialization of romance scams. But, he said, the firm already has paying customers producing “romantic books” or applications featuring virtual girlfriends and boyfriends.

Open vs. Closed Source

This raised the broader problem of open source versus closed-source AI models.

“I love open source, but I see limitations on how far we can take it in this space,” said Hypponen, noting that he had been at Helsinki University at the same time as one Linus Torvalds.

Closed source models tend to feature guardrails and restrictions on certain types of content, he said. ChatGPT’s usage policies, for example, require “apparent child sexual abuse material” to be reported to the National Center for Missing and Exploited Children, and the service refuses requests to produce “phishing email templates.” However, arguably, these restrictions can be worked around.

Open source models also feature content filters, and their licenses may carry restrictions on how they can be applied. The usage policy for Meta’s Llama prohibits using it for, among other things, “unlawful activity,” terrorism, sexual solicitation, and creating malware or spam.

But, Hypponen told The New Stack, he could “see no solution to the fact that you can always remove the safety and security restrictions if you have access to the code itself.”

Likewise, criminals tend not to pay too much attention to usage policies or licenses which restrict what they can do with a model.

Rogue Models

There are “rogue” models already out there, Hypponen said. Meta’s decision to open source its models under a “hybrid license” gave a hint of what could happen as rogue models fell into the hands of cybercriminals and other bad actors.

“A big part of these rogue large language models feed on Llama,” he said. Other large models were utilized by criminals, he added, but “mostly Llama because it’s the best of the open source ones.”

“The bad actors, they don’t care about the license, if they have the source code. That’s all they need.”

He posited a technical solution to keep open source models on track: “I could imagine some sort of a hybrid solution where you have part of the code open source, but then some sort of a guardrail application, which would always be closed source, which you would access through some online system.”

This would allow users to modify the things they really need to modify. “But you couldn’t change the security restrictions.”
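
As a rough illustration only, the split Hypponen describes might look something like the Python sketch below: an openly modifiable local model wrapped by calls to a hosted, closed source guardrail service. The endpoint URL, the check_with_guardrail helper and the response fields are hypothetical and not based on any real product.

# Hypothetical sketch of the "hybrid" idea: the model weights and inference
# code stay open and locally modifiable, but every prompt and completion is
# screened by a closed source guardrail service reachable only over an API.
# All names below are illustrative assumptions, not a real service.
import requests

GUARDRAIL_URL = "https://guardrails.example.com/v1/check"  # hypothetical endpoint


def check_with_guardrail(text: str, api_key: str) -> bool:
    """Ask the remote, closed source guardrail whether the text is allowed."""
    resp = requests.post(
        GUARDRAIL_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"content": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("allowed", False)  # hypothetical response schema


def guarded_generate(prompt: str, local_model, api_key: str) -> str:
    """Run a locally hosted open model, but only if the guardrail approves
    both the prompt and the completion."""
    if not check_with_guardrail(prompt, api_key):
        return "[request refused by guardrail service]"
    completion = local_model(prompt)  # any open source model callable
    if not check_with_guardrail(completion, api_key):
        return "[output withheld by guardrail service]"
    return completion

The point of such a design is that the filtering logic never ships with the model: users could retrain or modify the open parts locally, but the guardrail itself would live on a server they cannot edit.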

Even if such a model could be evolved, Hypponen told us, “I’m not really sure if people would be happy with that. There’s a lot of people who are very, very much into open source who are almost religious about these things.”

It wasn’t clear that a regulatory approach could work either. “I’m not really a big fan of regulation altogether.”

In the meantime, malware carrying a large language model was “doable,” said Hypponen, but hadn’t been seen yet. However, WithSecure has seen malware that “carries functionality to call an API of a large language model.”

Automation of Malware

That’s all on top of the automation and scalability AI offers criminals, along with everyone else, and the fact that LLMs allow them to analyze code for vulnerabilities.

The full automation of malware campaigns “should have happened already, but it hasn’t.” So far, attackers still take a few hours or days to spot that a domain has been blacklisted or an email has been categorized as phishing or a scam.

“It’s going to happen in the near future. And when it happens, then we will have good AI against bad AI.”

But he said machine learning and AI had long been part of security firms’ armories. “And I believe we have an edge. I believe security companies are better at this because we’ve been doing it for years and years.”


