On their websites or apps, many companies like LG or Honda offer a chatbot. These chatbots can answer users’ questions in a fun and easy way. Implementing such a feature can seem complicated at first but, with OpenAI’s latest announcements, it has never been easier.
On November 6th, during their first-ever DevDay, OpenAI announced many interesting things, such as a new model, GPT-4 Turbo, access to personalized GPTs on their website, and, what is going to interest us today, the possibility to create what are called Assistants. These Assistants are tailored chat interfaces that can be integrated into applications via an API. They are capable of:
- Executing functions within your code to seamlessly interact with your application.
- Writing and running code thanks to OpenAI’s code interpreter.
- Using provided files as context to answer the user’s questions.
For instance, the AI team at BAM is developing a VS Code extension and I wanted a chatbot to be able to explain what this extension does and how to use it. Therefore, I created an assistant and gave it the documentation of the extension as a PDF file. Here is an extract of a chat I had with this assistant:
These assistants are very easy to set up and can be used for many things, such as:
In the second part of this article I will guide you through setting up your bot, discuss the performance and pricing options, and help you determine the optimal chatbot solution for your specific requirements.
To start with OpenAI’s Assistants, you first need to create an OpenAI account, which will be used for the configuration and the billing of the bot. Then go to OpenAI’s Assistants board. Finally, create a new Assistant and start filling in the configuration panel (see below).
The instructions section of the configuration is important for the branding of your app. You might need the assistant to be very polite or to be fun, to be descriptive or to be concise. You can specify all of this by setting:
If you have a few ChatGPT good practices in mind, you can use them here: the same model is used in both cases. You can find a few good tips here; most of them are quite code-oriented, but you can generalize them to many use cases.
Finally, if you’re unsure about this section, the GPT builder can assist. It helps you craft a customized chatbot for use within OpenAI’s interface. You can then paste the content from the “Instructions” section created by the builder into your Assistant’s configuration.
When choosing the model you are going to use, three elements have to be considered:
To sum up, GPT-3.5 is a good fit if you need short, simple answers, frequently, and at a lower cost, while GPT-4 Turbo is better if you have a lot of context, want precise answers, and can afford to spend more money on the bot.
For your assistant to be able to do many different things, you have to configure tools.
Two other tools are available:
More information about these tools can be found in the documentation.
After setting up your Assistant, you can test its capabilities in the playground.
Once you are satisfied with the results, you can add your brand new assistant to your app.
If you can’t obtain the quality of answers you want with this tool, you can find other ways to integrate AI in your products here.
To make your assistant available in your app, you first need to add the openai library to your dependencies. (I used TypeScript in my example, but everything is also available in Python, as described in detail here.)
Then, you need to set up your OpenAI API key in your environment variables (or in the function variables) and create an OpenAI client instance. This instance will help you communicate easily with the OpenAI API.
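Assuming you use npm, adding the dependency is a single command (yarn or pnpm equivalents work just as well):

```shell
npm install openai
```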
Once this is ready, you can start a discussion with your assistant. To do so, you need to create a thread. A thread represents your conversation: it is what enables the assistant to keep the history of your exchanges, and all the messages are stored in it. To create one, use:
Once your thread is created, you can add a message to it and then launch what is called a run to indicate to OpenAI that you want to retrieve an answer from your assistant.
Currently, live visualization of the assistant’s response generation isn’t possible. To know when your answer has been generated, you need to periodically check the status of your run until it is completed (this usually takes between 5 and 20 seconds).
Once the run is completed, you can retrieve the messages from your thread.
All in all, once your thread is created, you can use this hook to handle your conversation:
After retrieving the answer, you can add another message, run the thread again, get an answer, add another message… over and over… et voilà! You have had a nice chat with your assistant. With only a nice design left to implement, you have created a personalized chatbot in record time.
In summary, OpenAI’s new Assistants simplify the process of integrating a sophisticated chatbot into your application.
Using these assistants to answer your users’ questions is already a significant step forward. However, I think that OpenAI’s assistants can do much more. Looking ahead, I plan to take a look at how the Functions feature can further elevate your app’s performance. This will involve exploring ways to perform in-app actions based on user requests, such as adjusting settings, scheduling meetings, or assisting with travel and accommodation arrangements. Stay tuned for more insights on leveraging the full potential of AI to enhance user experience and app functionality.