ChatGPT now has an official API - this means we can migrate our cocktail bot from the Davinci model to the ChatGPT model.
If you prefer video content, you can watch the full video walkthrough here (it's pretty short as this is surprisingly easy to do) - it also covers a few things that I might have missed in this write-up.
You'll need to create an account on OpenAI and get an API key, which you'll need later to make API requests. You can create a key by clicking on your profile picture and then clicking on "Manage API Keys".
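As a quick sketch of where that key ends up - assuming you use the openai Python package and keep the key in an OPENAI_API_KEY environment variable, both of which are my assumptions rather than requirements - it gets loaded something like this:
import os
import openai

# Assumption: the key is stored in an environment variable rather than hard-coded.
openai.api_key = os.environ["OPENAI_API_KEY"]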
First though, we need to do a bit of "prompt engineering" - yes, I've come to accept that this may actually be a real job at some point…
Head over to the playground tab on OpenAI and we'll create our prompt.
There's now a new model available in the dropdown list - we'll be using the "chat" model. Selecting this option brings up a new UI - the most important area of this is the "System" section. This is where we'll be writing our prompt.
The first thing we'll do is tell the language model what we want it to do and what it should know about. In our case we want it to be an expert in cocktails and alcoholic beverages.
You are an AI assistant that is an expert in alcoholic beverages.
You know about cocktails, wines, spirits and beers.
You can provide advice on drink menus, cocktail ingredients, how to make cocktails, and anything else related to alcoholic drinks.
The next thing we want to do is to try and keep our conversation as focused as possible. We don't want the language model to get distracted by other topics. So we'll tell it to give us a generic answer if we ask it about something it doesn't know about.
If you are unable to provide an answer to a question, please respond with the phrase "I'm just a simple barman, I can't help with that."
We also want our bot to be helpful and friendly - no one wants to talk to a miserable bar person.
Please aim to be as helpful, creative, and friendly as possible in all of your responses.
I've also noticed in experimenting that occasionally the language model will refer to external URLs or blog posts - particularly when you ask it for details about a cocktail. So we'll try and encourage it not to do that.
Do not use any external URLs in your answers. Do not refer to any blogs in your answers.
And finally, we want it to output lists in a nicely formatted way.
Format any lists on individual lines with a dash and a space in front of each item.
With the new chat model, this last instruction may not be needed - the chat model has been fine-tuned to give responses that people will like, so lists should already be nicely formatted.
That's our prompt - you can copy and paste the above lines into the playground.
Let's get our chat bot talking to us. Add a user question:
"What are some cocktails I can make at home?"
This should give you a nice list of cocktails.
One of the really important things for our chatbot is that we want it to use context from previous exchanges. So we can test that by adding a follow-up question - for example, we can ask what glasses we should use for the suggested cocktails.
Add another user question:
"What glasses do I need?"
You should get an answer that is relevant to the set of suggested cocktails.
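To see what the playground is doing behind the scenes, here's a rough sketch of the equivalent API call - this assumes the openai Python package and the gpt-3.5-turbo chat model, and the earlier answer shown in the messages list is just an illustrative placeholder:
import openai

system_prompt = "You are an AI assistant that is an expert in alcoholic beverages."  # trimmed for brevity

# The whole conversation so far goes up with every request - that's how the model
# knows which cocktails the follow-up question is about.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What are some cocktails I can make at home?"},
    {"role": "assistant", "content": "- Margarita\n- Mojito\n- Old Fashioned"},  # placeholder answer
    {"role": "user", "content": "What glasses do I need?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])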
You can play around with the system prompt and see how your bot behaves. It should work pretty well, but there's always room for fine tuning.
To make things a lot easier for you, I've created a very simple Python command-line application that lets you test your bot. You just need to copy in the prompt that you've created along with any settings, and you'll have a fully working chatbot.
You can find the code for this on GitHub here: the code
Follow the instructions in the README to get everything set up - it's pretty straightforward.
There are a few extra bells and whistles in the code. I've added moderation to the user questions - this is really important for any chatbot that takes user input. You don't want your bot to be used to spread hate speech or other offensive content.
OpenAI offers a nice API for this - which we're simply plugging into.
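Roughly, the check looks like this - a sketch using the openai package's moderation endpoint; the function name and the refusal message are my own illustrative choices rather than the exact code in the repo:
import openai

def is_flagged(user_input: str) -> bool:
    # Ask OpenAI's moderation endpoint whether the text breaks its content policy.
    result = openai.Moderation.create(input=user_input)
    return result["results"][0]["flagged"]

question = "What are some cocktails I can make at home?"
if is_flagged(question):
    print("I'm just a simple barman, I can't help with that.")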
I know that moderation of user input often seems to trigger people - I can understand that for some people moderation can feel very heavy-handed and can stifle creativity. But there are some people who seem to feel that any moderation is "wokeness gone mad" and an infringement of their right to free speech. I'm not going to get into that debate - suffice it to say, if you ever want to make your chatbot public, you'll be glad that you've added moderation.
To maintain conversation context I'm just keeping the last 10 questions and answers and sending these up to the API along with each new user question. This will work well for short conversations, but eventually the bot will start forgetting the start of the conversation.
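As a sketch of that trimming - the helper name and the exact cut-off here are illustrative rather than the repo's exact code:
MAX_TURNS = 10  # keep the last 10 question/answer pairs

def trim_history(messages):
    # messages[0] is the system prompt; everything after it is the conversation so far.
    system_prompt, conversation = messages[0], messages[1:]
    # A question/answer pair is two messages, so keep the last MAX_TURNS * 2 of them.
    return [system_prompt] + conversation[-MAX_TURNS * 2:]

Each new user question gets appended to the list, the list gets trimmed, and the result is sent up with the next API call.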
There are many more clever things you can do here - and some of that cleverness is what makes the ChatGPT implementation so impressive.
The code is amazingly simple - in total there are around 100 lines, and most of that is simply boilerplate API calls to OpenAI.
One last point - as with any of these Large Language Models, the output may look very plausible but could be completely wrong. I won't be held responsible for any disgusting cocktails you make or hangovers you get.