Build a Simple JavaScript Helper Chatbot Using OpenAI, GPT-3 and Python


In this tutorial, we will build a JavaScript helper chatbot using OpenAI's GPT-3 engine. This chatbot will be capable of answering your JavaScript-related queries.

Generative Pre-trained Transformer 3 (GPT-3) is a language model created by OpenAI that can generate written text of such quality that it is often difficult to distinguish from text written by a human.

You need an OpenAI GPT-3 API key with Codex engine access to test the code in this blog. At the time of writing, OpenAI is running a beta program for GPT-3, and you can sign up for the beta program here. After signing up, you need to apply for Codex access here. Read the complete getting-started guide here.

OpenAI

In December 2015, Elon Musk, together with other investors, announced the formation of OpenAI. The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public. In 2019, OpenAI transitioned from a non-profit to a for-profit organization. The company distributed equity to its employees and partnered with Microsoft Corporation, which announced an investment package of US$1 billion into the company. OpenAI then announced its intention to commercially license its technologies, with Microsoft as its preferred partner. In June 2020, OpenAI announced GPT-3, a language model trained on hundreds of billions of words from the Internet. Today we will be using this model in our helper chatbot.

OpenAI Engines

When you sign up for the OpenAI beta API, you get $18 in credit and an API key, so you can start working with the API immediately. Before starting, it is important to understand the different engines provided by the API, as they have different capabilities, response times, and costs associated with them.

Base series: A set of GPT-3 models that can understand and generate natural language.
Instruct series (Beta): A set of specialized models that are similar to the base series, but better at following your instructions.
Codex series (Private Beta): A set of models that can understand and generate code, including translating natural language to code.
Content filter: A fine-tuned model that can detect whether text may be sensitive or unsafe.
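As a quick sanity check, you can see which engines your account exposes mention Codex. The helper below is a hypothetical sketch that filters an engine listing locally; with a real key, the pre-1.0 openai Python client exposes openai.Engine.list() for the live listing.

```python
def codex_engine_ids(engines):
    """Return the IDs of engines whose name mentions Codex.

    `engines` is a list of dicts with an "id" key, mirroring the shape of the
    entries under "data" in the engine-list API response.
    """
    return [e["id"] for e in engines if "codex" in e["id"]]

# Local sample standing in for a live listing (made-up data):
sample = [{"id": "davinci"}, {"id": "davinci-codex"}, {"id": "cushman-codex"}]
print(codex_engine_ids(sample))  # ['davinci-codex', 'cushman-codex']
```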

OpenAI currently offers two Codex models:

davinci-codex: The most capable Codex model, particularly good at translating natural language to code. Requests can use up to 4,096 tokens (double the usual limit).
cushman-codex: Almost as capable as Davinci Codex, but slightly faster; this speed advantage may make it preferable for real-time applications. Requests can use up to 2,048 tokens.

In our example we will be using davinci-codex, as it is the most capable model that OpenAI has to offer.

Install OpenAI in Your System

Run the following command from your terminal to install the openai package. Installing it in a separate virtual environment is highly recommended.

#!/bin/sh

pip install openai

Import Libraries

import os
import openai

Get your OpenAI API key from the environment (export the API key to an environment variable named OPENAI_API_KEY, or hard-code it in the code below).

openai.api_key = os.getenv("OPENAI_API_KEY")
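If the environment variable is unset, os.getenv() silently returns None and the API call fails later with a confusing error. A small guard makes that failure mode obvious; load_api_key below is a hypothetical helper, not part of the openai package.

```python
import os

def load_api_key(env_var="OPENAI_API_KEY"):
    """Read the API key from the environment, failing fast if it is absent."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable first")
    return key
```

You would then write `openai.api_key = load_api_key()` instead of the bare `os.getenv()` call.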

We will now define a custom function that accepts a prompt string and returns the API response.

def get_response(prompt):
    """Send the prompt to the Codex engine and return the raw API response."""
    return openai.Completion.create(
        engine="davinci-codex",
        prompt=prompt,
        temperature=0,        # deterministic output for a well-defined Q&A task
        max_tokens=180,
        top_p=1.0,
        frequency_penalty=0.5,
        presence_penalty=0.0,
        stop=["Me:"]          # stop before the model writes the user's next turn
    )
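get_response() returns the full API response object; the generated text lives under the first entry of choices. The sample below mimics that structure with made-up content so you can see the extraction in isolation.

```python
# Made-up response mirroring the Completion response shape (only the fields we use)
sample_response = {
    "choices": [
        {"text": " You can use Array.prototype.map for that.", "index": 0, "finish_reason": "stop"}
    ]
}

def extract_text(response):
    """Pull the generated text out of the first choice."""
    return response["choices"][0]["text"]

print(extract_text(sample_response).strip())  # You can use Array.prototype.map for that.
```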

The parameters used in the request body are as follows:

engine: The ID of the engine to use for this request.
prompt: The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
temperature: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
max_tokens: The maximum number of tokens to generate in the completion.
top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
stop: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
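The stop parameter is handled server-side, but its effect is easy to picture: generation halts before the first stop sequence appears. The toy function below reproduces that truncation locally; it is an illustration only, not part of the API.

```python
def apply_stop(text, stop_sequences):
    """Truncate text at the first occurrence of any stop sequence."""
    for s in stop_sequences:
        i = text.find(s)
        if i != -1:
            text = text[:i]
    return text

print(repr(apply_stop("Use const by default.\nMe: thanks!", ["Me:"])))
# 'Use const by default.\n'
```

This is why the chatbot below uses stop=["Me:"]: without it, the model would happily keep writing both sides of the conversation.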

Now we will define a loop that calls the get_response() method and prints the response after each user input.

print("Type exit to quit the chatbot")
last_messages = ["Helper Bot: Hi"]
while True:
    user_input = input("Type your message: ")
    print("\033[A                             \033[A")  # move the cursor up to overwrite the raw input line
    print("Me: " + user_input)
    if user_input == "exit":
        break
    last_messages.append("Me: " + user_input)
    prompt = "\n".join(last_messages[-3:])  # keep only the three most recent turns as context
    bot_response = get_response(prompt + "\nHelper Bot:")["choices"][0]["text"]
    last_messages.append("Helper Bot: " + bot_response)
    print(last_messages[-1].rstrip())
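The prompt-windowing step in the loop keeps only the most recent messages as context, trading conversation memory for shorter (and cheaper) prompts. In isolation, a slice like last_messages[-3:] does the trimming; the conversation below is made up for illustration.

```python
last_messages = [
    "Helper Bot: Hi",
    "Me: What is a closure?",
    "Helper Bot: A closure is a function bundled with its lexical scope.",
    "Me: And what about let vs var?",
]

# At most the three most recent turns survive into the prompt
prompt = "\n".join(last_messages[-3:])
print(prompt.splitlines()[0])  # Me: What is a closure?
```

A larger window would give the bot more memory at the cost of more tokens per request.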

That's it! Run the code above and ask your JavaScript questions; the chatbot will answer your queries.

Conclusion

We have now built a JavaScript helper chatbot using OpenAI's davinci-codex engine. You can refer to OpenAI's official documentation and experiment with the code above by changing the different parameters.

I'd love your feedback; please let me know what you think.

All the code from this article is available over on GitHub. It is a Python project, so it should be easy to import and run as is.
