
Getting Started With OpenAI’s GPT Builder, and How It Uses RAG


After being impressed by a first ChatGPT session, many users will want to tailor the experience to their own domain. We know that retraining an LLM is not a simple task, and I've already gone over the pros and cons of RAG (retrieval-augmented generation). OpenAI also offers the GPT Builder, a way to create a customized version of ChatGPT. So let's take a look.

  • Note: This post will be largely code-free, although I assume the reader has a solid understanding of the LLM ecosystem.

A GPT created by OpenAI’s builder is simply referred to as a “GPT,” and you will see the plural term “GPTs” used shortly. Remember that OpenAI is keen to appear to own the concept of the Generative Pre-trained Transformer, even though they do not.

Now, the first proviso is that you can’t do this on the free tier offered by OpenAI. If you have access to a corporate account or you are already a GPT-4 user, this is no issue. To keep things level, I normally test with a standard consumer account, hence this warning. Perhaps some of OpenAI’s competitors already provide this functionality on a free tier; regardless, OpenAI sets the standard they will be measured against.

The entry point from the ChatGPT main page is the Explore GPTs tab.

Here are the current trending GPTs that users have chosen to share. I think this section will later morph into more of a marketplace.

We’re going to create our own GPT, and I’ll again use William Shakespeare’s sonnets as my example domain, since they form a well-known, fixed corpus. They are written in older English, which should challenge the LLM slightly, while still serving as a deep well of self-contained knowledge. We’ll fashion the chat interface as a tutor helping a student who is studying the sonnets. I realize that a bot for internal corporate help is more likely to be your professional aim.

Hit the Create button in the top right, and off we go.

When instructing an LLM, focus on the who, how, why and what. We need to specify that the persona for this GPT is a tutor assisting a student with Shakespeare’s sonnets. First, the easy bit: naming.

Well, this is a deeply inventive bit of naming advice. It did also generate a suitable image (remember, DALL·E is also part of the OpenAI ecosystem).

Now for the important bit: the instructions.

Here are my instructions on the left, along with a result on the right. The Configure tab holds the current summary that the conversational builder is developing. You can nudge user questions away from certain areas, and that kind of steering might be particularly relevant for a corporate bot. Note that the starter questions are generated.
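The exact wording I used isn’t reproduced here, but something along these lines, covering the who, how, why and what, is what the builder expects (the text below is an illustrative sketch, not a verbatim copy):

```
You are a friendly tutor helping a student who is studying
William Shakespeare's sonnets. Explain vocabulary, imagery and
historical context in plain modern English, quoting the relevant
lines where helpful. If the student asks about anything unrelated
to the sonnets, politely steer the conversation back to them.
```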

To form an example question, I just selected a term from a sonnet, noting its probable intended meaning. I’m no English literature academic, but the answer looks correct.

This is all fine, so let’s see what happens if I go off-topic.

Of course, it could be that we have new information to offer. I’m going to introduce a “new” sonnet, based on the well-known bear.

How GPT Builder Uses RAG

Let’s turn off web browsing and upload a new sonnet for the LLM to integrate. This is done via RAG, which means that OpenAI does a contextual search inside your data, and passes what it finds to your query prompt as it goes up to the LLM for processing. This effectively allows you to add to what the LLM already knows; however, we will see the limits of that approach shortly.
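OpenAI doesn’t expose how its retrieval works internally, but the general pattern is straightforward. Here is a minimal sketch in Python using the openai SDK; the chunking strategy, model names and the sonnets.txt file are my own assumptions for illustration, not OpenAI’s internals:

```python
# Minimal RAG sketch: embed chunks of the source text, retrieve the
# chunks most similar to the question, and pass them to the model as
# context. Assumes OPENAI_API_KEY is set and sonnets.txt exists.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunks = open("sonnets.txt").read().split("\n\n")  # naive: one sonnet per chunk
chunk_vecs = embed(chunks)

question = "What does 'summer's lease' mean in Sonnet 18?"
q_vec = embed([question])[0]

# Cosine similarity against every chunk, then keep the top three.
scores = chunk_vecs @ q_vec / (
    np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
)
context = "\n---\n".join(chunks[i] for i in np.argsort(scores)[-3:])

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the supplied context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(reply.choices[0].message.content)
```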

Note that the context window for a query isn’t infinite; it’s about 32K tokens for GPT-4. Also, the Code Interpreter option needs to be enabled for your GPT to work with additional uploaded files.
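If you want to check how much of that window your retrieved context consumes, OpenAI’s tiktoken tokenizer gives a quick count (the 32,768 figure here assumes the 32K GPT-4 variant):

```python
# Count tokens in the assembled prompt with tiktoken to make sure the
# retrieved context fits the model's window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
prompt = f"Context:\n{context}\n\nQuestion: {question}"  # from the sketch above
n = len(enc.encode(prompt))
print(f"{n} tokens used, {32_768 - n} left in a 32K window")
```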

So how does GPT deal with this gentle attempt to “poison” it with new knowledge?

This is an extremely good answer, but also a warning for more serious uses. When introducing your own information (and again, I’ll assume you are creating an internal corporate knowledge system or similar), anything imported should be consistent with the information that’s already there. Whatever you think of an LLM’s capabilities, it retains enough context about its sources to recognize when new material conflicts with what it already knows.

The final way of introducing information is through actions, which are effectively REST calls to outside services. You can make third-party APIs available by providing details of the endpoints and parameters, along with a description of how the model should use them. The schema looks self-explanatory. In the example given for a weather service, a GetCurrentWeather request is defined against the /location endpoint.
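To give a flavor of the format, here is a pared-down OpenAPI schema of the kind an action expects, written as a Python dict. The server URL and descriptions are placeholders; only the GetCurrentWeather / location shape comes from OpenAI’s example:

```python
# Sketch of an action schema: a minimal OpenAPI 3 description of a
# single GET endpoint the model can call. Values are illustrative.
weather_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],  # hypothetical host
    "paths": {
        "/location": {
            "get": {
                "operationId": "GetCurrentWeather",
                "summary": "Get the current weather for a location",
                "parameters": [
                    {
                        "name": "location",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                        "description": "City and country, e.g. Paris, France",
                    }
                ],
            }
        }
    },
}
```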

You can find out more about the format on OpenAI’s GitHub.

Conclusion

This functionality is clearly still in development and comes with restrictions. Making money from a GPT will be done through a marketplace, which is currently in testing. Any built GPTs will only be accessible within the OpenAI website — and, as mentioned, this is not available when using the free tier.

There is already a large range of available examples, so you may find that you don’t even have to create one yourself.

So is this a code-free custom LLM? GPT Builder does give you a way to create a GPT that follows persona instructions and accepts RAG-based data. While I did need to understand how an LLM works, I didn’t write any code, apart from the schema needed if you define an action. So this is certainly a simpler way of building and maintaining a specific LLM chat interface, and probably a good starting place.
