With an interface inspired by ChatGPT, LibreChat offers an “enhanced” ChatGPT clone: an open source alternative whose headline feature is AI model selection, along with additional customization options (including plugins for things like Retrieval-Augmented Generation).
On the Practical AI podcast, LibreChat creator Danny Avila said he’s especially proud of how the software offers the ability to search through all your past conversations. “To this day, it’s not a feature on ChatGPT or many different interfaces… I know a lot of people, just that one simple feature gets them on board.”
LibreChat’s site brags that it’s “a centralized hub for all your AI conversations,” promising to “harness the capabilities of cutting-edge language models from multiple providers in a unified interface” (including OpenAI’s models but also other open-source and closed-source models).
Its website boasts of “seamless integration” with AI services from OpenAI, Azure, Anthropic, and Google. (Later it specifically cites GPT-4, Claude, and Gemini Vision.)
It’s available in multiple languages and supports images and files, and “I think down the pipeline we’ll see integrations with videos,” Avila said on the podcast. “But right now, we’re handling vision with images.”
And of course, it’s free and “completely open-source,” according to the repository, with “community-driven development.” (It’s released under the MIT License.)
But for all its features, LibreChat also offers something else. It’s a powerful reminder of the passionate community that’s actively creating an ecosystem of open source AI tools…
Data Sovereignty
In an email interview with The New Stack, Danny Avila explained what’s useful about hosting your own AI chat interface. For LibreChat’s users — and for himself — it’s owning your own data, “which is a dying human right, a luxury in the internet age and even more so with the age of LLM’s.”
He’s noticed the way big sites like Reddit have revised their policies on web scraping and imposed limitations on API usage, with some companies actually requiring users to allow access to their data. And it’s not just social media companies. “The recent ‘free’ release of GPT-4o says it all,” Avila argues. As he sees it they want users not just for popularity, “but for the immediate influx of data that OpenAI is ‘paying’ for… As big tech continues to scale compute for AI training, it’s clear to me that data is a hotter commodity than oil.”
With locally-hosted LLMs, Avila sees users finally getting “an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience.” In this world, LibreChat “is naturally attractive as it can run exclusively on open-source technologies, database and all, completely ‘air-gapped.’” Even with remote AI services insisting they won’t use transient data for training, “local models are already quite capable…” Avila notes, “and will become more capable in general over time.”
And they’re also compatible with LibreChat.
On the podcast, Avila even demonstrated its ability to ingest a CSV spreadsheet with mock sales data — while still switching to different LLMs. “I switched to Cohere, and it didn’t give as good of a response as GPT-4, but it was able to see that and work with it.”
He also demonstrated how LibreChat uses a database tracking “conversation state” — making it possible to switch to a different AI model in mid-conversation.
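That mid-conversation switching works because LibreChat keeps the message history itself rather than leaving it with any one provider. The snippet below is not LibreChat’s code; it is a minimal Python sketch of the general idea, using the OpenAI-compatible chat API that most of these backends (including local ones like Ollama) expose. The URLs, API keys and model names are placeholders.

```python
# Minimal sketch (not LibreChat's code): if the full message history lives in
# your own database, you can replay it to any OpenAI-compatible backend and
# switch models in the middle of a conversation.
from openai import OpenAI

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this quarter's mock sales data."},
]

def ask(base_url: str, api_key: str, model: str, messages: list) -> str:
    """Send the accumulated conversation to one OpenAI-compatible endpoint."""
    client = OpenAI(base_url=base_url, api_key=api_key)
    reply = client.chat.completions.create(model=model, messages=messages)
    return reply.choices[0].message.content

# First turn against a hosted model (placeholder key and model name)...
answer = ask("https://api.openai.com/v1", "sk-placeholder", "gpt-4o", history)
history.append({"role": "assistant", "content": answer})

# ...then continue the same conversation on a local model, here Ollama's
# OpenAI-compatible endpoint, simply by replaying the stored history.
history.append({"role": "user", "content": "Which region looks weakest?"})
print(ask("http://localhost:11434/v1", "ollama", "llama3", history))
```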
Down the line, Avila tells the podcasters, he’s even considering the possibility of preconfiguring LibreChat to switch to the best AI for particular tasks — either by creating a kind of “smart router,” or letting users do the preconfiguring themselves.
But ultimately Avila was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and his shock that ChatGPT could err on such a basic component of privacy. “That surprised me,” Avila said on the podcast, “but it really planted the seed for what I wanted to see.”
Avila then described LibreChat as “inherently completely private,” but “with the flexibility of having remote stuff in there too, which I think is important.”
Advantage: Open Source
Among its advantages, LibreChat is easily deployed and can serve many different AI instances to multiple users. But on the podcast, Avila also discussed LibreChat’s plugins (“where you can make requests to DALL-E or Stable Diffusion for image generations, you can search arXiv papers — things like that.”) The repository explains that plugins let users “tailor the platform to their specific needs.”
Avila told the podcasters that LibreChat has “such a rich environment now — for getting only JSON responses or being able to use tools with Anthropic… I’ve got a lot of things planned there, where I want to see just that tool environment really grow.” He also wants to make it easier to quickly add home-grown tools for your own private data sets.
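As a rough illustration of the “JSON-only responses” he mentions, here is what JSON mode looks like when talking to an OpenAI-compatible chat API directly. This is a hedged sketch rather than LibreChat’s internals, and the model name is a placeholder.

```python
# A rough illustration (not LibreChat's internals) of the "JSON-only responses"
# idea, using the chat API's JSON mode. Assumes OPENAI_API_KEY is set; the
# model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # constrain the reply to valid JSON
    messages=[
        {"role": "system", "content": "Reply only in JSON."},
        {"role": "user", "content": "List three plugins a chat UI might offer."},
    ],
)
print(json.loads(resp.choices[0].message.content))  # parses cleanly
```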
It’s not the only customization option. (Users can also create and share their custom presets.) But maybe that’s one of the advantages of an open source project. Podcast co-host Chris Benson (a principal AI strategist at Lockheed Martin) pointed out that there were 117 contributors on GitHub [rising to 130 by May 24].
And Avila acknowledged gratefully that it’s been an open source success story, spontaneously drawing both contributions and collaboration. “I think that’s really what’s helped the project explode too — that there’s such an openness to what people want to see in it.”
The podcasters asked Avila if he’d seen any famous organizations using LibreChat. “For sure,” he answered with a laugh. “I caught wind of Mistral using the app just to prototype their chat interface….” he said, adding later that “there have been people within Microsoft who are just helping people prototype their own interfaces and things like that… I’m just kind of blown away.”
OpenAI Assistants — And Looking Ahead
Recently Avila wrote that the current plugin system “will be discontinued in favor of something more modular, including using different LLM providers.” In fact, part of LibreChat’s appeal is its continuous updates — and its ability to integrate new functionalities from the rest of the AI ecosystem.
“With MLX, you can now enjoy the benefits of running large language models locally on your Apple Silicon hardware,” explains a May blog post on configuring LibreChat to use MLX.
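A typical way to wire that up is to run mlx-lm’s local, OpenAI-compatible server (for example, `python -m mlx_lm.server --port 8080`) and point LibreChat at it as a custom endpoint; whether or not that matches the blog post exactly, the snippet below is only an illustrative client-side check, with an assumed port and model name.

```python
# Illustrative only: query a local MLX server that exposes an OpenAI-compatible
# API (e.g. started with `python -m mlx_lm.server --port 8080`). The port and
# model name are assumptions; LibreChat itself would be pointed at the same URL
# through its custom-endpoint configuration.
from openai import OpenAI

mlx = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = mlx.chat.completions.create(
    model="mlx-community/Meta-Llama-3-8B-Instruct-4bit",
    messages=[{"role": "user", "content": "Hello from Apple Silicon"}],
)
print(resp.choices[0].message.content)
```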
May 14 saw a fresh release candidate with early container image builds supporting OpenAI’s GPT-4o (along with other fixes).
Their integration with OpenAI’s Assistants API is already live on the main branch, Avila told The New Stack, and has “seen a lot of success and usage.”
Avila thinks many people don’t realize that ChatGPT is already “agentic under the hood” — that is, offering a collection of powerful goal-directed helpers capable of autonomous decision-making. And in addition, ChatGPT “continues to grow in this area,” creating a challenge for the open source world to try to keep up. “We can get really close with OpenAI’s dev tools, but reaching parity with open-source or competing counterparts is a big concern for me… I’m gearing up for development on this front.”
Looking ahead, Avila told The New Stack that he sees LibreChat expanding its agent support to non-OpenAI endpoints (both open-source offerings like Ollama and closed-source ones like Anthropic and Google). “People really want the ‘ChatGPT’ experience but with any AI,” Avila says — “where you can casually talk and have helpers run tasks for you, whether it’s using an API for data analysis or generating marketing materials.”
Avila says LibreChat is now “iterating” on their successful integration of OpenAI agents — but he’s also working on prompt libraries, a user management system, and better RAG (Retrieval Augmented Generation) controls — “features that are becoming more expected with AI chat UI’s at this point…”
“There’s a lot of work cut out for me, but I’m glad to have such a great community to build with!”
Avila acknowledges that AI is now deeply integrated into his own workflow — and that he’s become a heavy user of LibreChat. “I may be biased but I can’t use any other UI…
“This keeps motivating me to improve and ensure it’s up to a high standard.”
Launched less than a year ago, the LibreChat project already has 11,900 stars on GitHub. So when the podcasters asked Avila to make a prediction for the AI ecosystem, he answered without hesitation: an open source future. “It’s the future I want to see, and I feel like a lot of people in tech want to see it too… Where these large language models are getting so good every day, there’s a lot more time and money invested in being able to host these things, just from a consumer-grade computer.” It’s what LibreChat is catering to, “and many similar projects.”
He’s still amazed that today he can use Llama 3, Meta’s own open source large language model, when even a year ago that would have felt much further away.
“And I really think that’s the direction, both on the high level and low level.”
And that accessibility obviously underlies part of the success of LibreChat.