Welcome to the second article in the series, where you will learn how to use the OpenAI models (the GPT models, minus the “Chat” in front) available through the API. To do this, we’ll be using Python code throughout. If you have not done so already, check out our first article here. Some basic familiarity with coding is required. Luckily, we have set you up with some resources to learn all the things you need to know:
Requirements:
- Python is installed on your computer. Check out this article from Tilburg Science Hub for a tutorial on this.
- An IDE, such as VSCode, is installed on your computer. Look at this tutorial from Tilburg Science Hub on how to install it.
We of course recommend following this article, but feel free to skip it and move on to the next one. The API offers many more options for prompt engineering and flexibility: you will even use different models interchangeably (don’t be alarmed) and develop a chatbot later on in the series. Additionally, you can save all the code we provide and use it as a template for later use. So, with some folder management, you have all the tools at your fingertips, and AI assistance becomes a fill-in-the-blank exercise.
By the end of this article, you will:
- Have obtained your personal API key.
- Understand and be capable of running the OpenAI API in Python.
- Be capable of functionalizing the OpenAI API and running it in an interactive window.
- Implement prompt engineering techniques using the OpenAI API.
Difference between the OpenAI API and ChatGPT
ChatGPT (Desktop Application)
The desktop ChatGPT is an application that allows users to communicate with an AI-powered chatbot to ask questions, perform tasks, or generate content. Its low-setup browser experience is often sufficient for using AI in your studies.
OpenAI API
The OpenAI API enables individuals to access and customize the models used in ChatGPT. If you’re looking to create engineered prompts, the API allows for more flexibility in working with the models.
Setting up your environment
- Create or Log Into Your OpenAI Account:
- Visit OpenAI’s API documentation website and either sign in with an existing account or create a new one. This step is important as it grants you access to the API keys required for authentication.
- Generate an API Key:
- Navigate to the API keys section within your OpenAI account dashboard.
- Create a new API key. Remember to copy this key immediately, as it won’t be visible again once you navigate away from the page.
- Secure Your API Key Using Environmental Variables:
MacOS
- Open your terminal.
- Run the command `nano ~/.zshrc` to open your shell profile in the nano editor. (Note: for older MacOS versions, you might need to run `nano ~/.bash_profile` instead.)
- Add the line `export OPENAI_API_KEY='your-api-key'` at the end of the file, replacing `your-api-key` with the actual key you obtained earlier. (You can save at any point with `Ctrl + O` followed by `Enter`.)
- Save and exit by pressing `Ctrl + X`, then `y`, and then `Enter`.
- Apply the changes by running `source ~/.zshrc` or restarting your terminal.
Windows
- Right-click on `This PC` or `My Computer` on your desktop or in File Explorer, and select `Properties`.
- Click on `Advanced system settings` on the left side of the System window.
- In the System Properties window, click the `Environment Variables` button near the bottom of the Advanced tab.
- In the Environment Variables window, under the `System variables` section, click `New…` to create a new system variable.
- In the New System Variable dialog, enter `OPENAI_API_KEY` as the variable name and your actual API key as the variable value.
- Click `OK` to close the New System Variable dialog. Then click `OK` again to close the Environment Variables window, and `OK` once more to close the System Properties window.
This video shows setting your API Key as an environmental variable on a Mac.
4. Verifying Your API Key:
- In the terminal, run `echo $OPENAI_API_KEY`.
- This command should return your API key, confirming it has been correctly set up as an environmental variable.
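You can also check from Python itself. Here is a minimal sketch using only the standard library `os` module; it deliberately prints just a short prefix of the key, so the full key never appears in your output:

```python
import os

# Look up the environment variable; returns None if it is not set
api_key = os.environ.get("OPENAI_API_KEY")

if api_key is None:
    print("OPENAI_API_KEY is not set - revisit the steps above.")
else:
    # Show only the first few characters so the key itself is not exposed
    print(f"OPENAI_API_KEY found (starts with {api_key[:3]}...)")
```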
5. Install the OpenAI Python library
- Install the OpenAI Python package. You can do so by running `pip install --upgrade openai` in your terminal.
Using the OpenAI API in Python
Now that we are all set up, you’ll learn to make your very first request to the OpenAI API!
Step 1: Import Required Packages
Start by opening your IDE and creating a new Python file. At the top of the file, import the necessary packages for interacting with the OpenAI API:
from openai import OpenAI
import os
Step 2: Initialize the OpenAI Client
Suppose we want to ask, “What is prompt engineering?” via the API. To do so, we first set up the client by calling the `OpenAI()` constructor and setting our `api_key`. This is like logging into a service (here, the API) with a username and password.
client = OpenAI(api_key = os.environ.get("OPENAI_API_KEY"))
Step 3: Create a completions request
Once we are “logged in”, we can call the `chat.completions.create()` function on our login (the `client`), which is like sending a message to the API to get an answer. Inside this completion request, we specify the model and the prompt to send to it.
response = client.chat.completions.create(
model="gpt-4o",
messages = [
{"role": "system",
"content": "You are a Teacher about Artificial Intelligence."},
{"role": "user",
"content": "What is Prompt Engineering?"}
]
)
This function has two important parameters: the `model` and the `messages`.
The `messages` parameter specifies a role (either “system”, “user”, or “assistant”) and content. Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.
- The system message helps set the behaviour of the assistant. In other words, it provides instructions to guide the assistant’s behaviour throughout a conversation.
- For example, you can change the personality of the assistant or give specific instructions about how it should behave. This is something we will discuss later in the series.
- Note that the system message is optional, and the assistant’s behaviour without a system message is likely to be similar to using a generic message such as “You are a helpful assistant.” In the example above, it will act as a teacher about Artificial Intelligence.
- The user messages provide requests or comments for the assistant to respond to. These are the prompts that we normally engineer through the desktop version, and something we have already engineered in our template example!
- Assistant messages store previous assistant responses.
- It is an important factor in prompt engineering, as the model’s responses can be manually specified to give the model examples of desired answers. This is something we call `one-shot` or, if we add more examples, `few-shot prompting`. We explore this skill of shot prompting in a later part of the series.
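As a preview, a few-shot conversation is simply a longer `messages` list in which the assistant messages hold hand-written example answers. A minimal sketch (the sentiment-labelling task here is our own illustrative choice, not part of this series):

```python
# Few-shot prompting: earlier user/assistant pairs act as worked examples
few_shot_messages = [
    {"role": "system", "content": "You label movie reviews as Positive or Negative."},
    # Example 1: a user request plus a hand-written assistant answer
    {"role": "user", "content": "I loved every minute of it."},
    {"role": "assistant", "content": "Positive"},
    # Example 2
    {"role": "user", "content": "A complete waste of time."},
    {"role": "assistant", "content": "Negative"},
    # The actual request we want the model to answer in the same style
    {"role": "user", "content": "The plot dragged, but the acting was superb."},
]

# This list would be passed as messages=few_shot_messages in the completion request
print(len(few_shot_messages))
```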
The `chat.completions.create()` function also allows for control parameters.
- The temperature argument requires a number between zero and two.
- If set to zero, the answer will be predictable, which is good for summarization and transcription.
- If set to two, it will be more random.
- The max_tokens setting controls the desired response length.
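These control parameters are simply extra keyword arguments to the same `chat.completions.create()` call. As a sketch, here they are gathered in a plain dictionary first (the specific values are our own illustrative choices), which could then be unpacked into the request with `**request_kwargs`:

```python
# Control parameters for a completion request, collected in one place.
# They would be passed along as client.chat.completions.create(**request_kwargs).
request_kwargs = {
    "model": "gpt-4o",
    "temperature": 0,  # 0 = predictable (good for summarization/transcription), 2 = most random
    "max_tokens": 50,  # caps the length of the generated response
    "messages": [
        {"role": "user", "content": "Summarize prompt engineering in one sentence."}
    ],
}

# The temperature argument must lie between zero and two
assert 0 <= request_kwargs["temperature"] <= 2
```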
Retrieving the text response
print(response.choices[0].message.content)
The attribute path in `print(response.choices[0].message.content)` is outside the scope of this article to explain; just take it as a handy given to get your required output!
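If you are curious anyway: the response is, roughly, a nested container holding a list of `choices`, each with a `message` whose `content` is the generated text. Below is a hedged imitation using plain dictionaries (the real object is navigated with dots, which is why we write `response.choices[0].message.content`):

```python
# A plain-dict imitation of the rough shape of a chat completion response.
# The real API returns an object accessed with attributes, not square brackets.
fake_response = {
    "choices": [  # one entry per generated answer; usually just one
        {
            "message": {
                "role": "assistant",
                "content": "Prompt engineering is the craft of writing effective prompts.",
            }
        }
    ]
}

# Dict-style equivalent of response.choices[0].message.content
text = fake_response["choices"][0]["message"]["content"]
print(text)
```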
Underneath, you will find a basic template that you can save as a Python (`.py`) file:
from openai import OpenAI
import os
client = OpenAI(api_key= os.environ.get("OPENAI_API_KEY"))
# Our Question / Prompt
prompt = "What is prompt engineering?"
# Creating a Completion Request
response = client.chat.completions.create(
model = "gpt-4o",
messages = [
{"role": "system",
"content": "You are a Teacher about Artificial Intelligence."},
{"role": "user",
"content": prompt}
],
max_tokens = 50, # Maximum output length
temperature = 1) # Temperature (0 = highly deterministic, 2 = highly random)
# Extracting the Model's Answer
print(response.choices[0].message.content)
If you run this script, you will print out a well-articulated explanation of prompt engineering from the GPT-4o model.
Congratulations! You have successfully created and understood a basic interaction with the OpenAI API!
Become more Efficient
As a “Prompt Engineer,” manually typing the API function whenever you want to use it seems tedious and time-consuming. Fortunately, there is a simple solution: Define your own Python function and use a Jupyter Interactive Python window.
Functionalize the OpenAI API
Python allows users to define functions. By wrapping the completions request inside a function that returns the formatted output of the AI model, we make the process of sending API requests far more efficient. This sets up easy reuse of the request throughout your projects. Here is a template for it:
def ChatGPT_API(prompt):
# Create a request to the chat completions endpoint
response = client.chat.completions.create(
model="gpt-4o",
# Assign the role and content for the message
messages=[{"role": "user", "content": prompt}],
temperature = 0) # Temperature: Controls the randomness of the response (0 is deterministic, 2 is highly random).
return response.choices[0].message.content
# Specify the function with your prompt
response = ChatGPT_API("Specify your prompt here") # Example: "What is Artificial Intelligence?"
print(response)
However, VSCode doesn’t store variables or functions automatically, and every time you run your code inside a Python file, the whole script will run. Therefore, our function has limited practical use in standard script execution, as you can see in the video below. Luckily, this is where the Jupyter Interactive Window comes in.
Interactive Python Window
Now you will learn how to use interactive Python windows within VSCode by using Jupyter. This allows you to execute Python code line by line, where you can immediately see the output and store functions in the environment for later use.
Step 1: Installing the Jupyter Extension in VSCode
- Open VSCode.
- Go to the Extensions sidebar by clicking on the square icon in the sidebar on the left, or use the shortcut `Ctrl+Shift+X` on Windows and `Command+Shift+X` on MacOS.
- Search for the Jupyter extension and install it. This extension enables interactive notebooks and windows within VSCode.
Step 2: Setting Up Interactive Mode
- Open the Command Palette with `Ctrl+Shift+P` on Windows and `Command+Shift+P` on MacOS.
- Type and select Preferences: Open Settings (UI).
- Search for Jupyter: Interactive Window: Text Editor: Execute Selection.
- Enable the option to send selected code in a Python file to the Jupyter interactive window instead of the Python terminal when you press `Shift+Enter`.
Step 3: How to Run the Interactive Mode
- Open a Python file (e.g., `script.py`) in VSCode.
- Write a piece of Python code. For example, paste the functionalized API code from above.
- Select a part of your code and press `Shift+Enter`. The selected code is sent to the Jupyter interactive window, and the executed code is stored in the environment. This allows you to run code line by line or in blocks, similar to Jupyter Notebooks.
Advantages
- Interaction: You can execute code in pieces and immediately see the results without having to rerun the entire script each time. This is beneficial:
- When working with long loading times in your script.
- When dealing with lengthy scripts that produce a lot of output.
- Storage of Functions: Functions and variables remain stored in the environment, allowing you to call them later without redefining them.
- Flexibility: Useful if you want to refine your prompt or change parameters.
- For instance, you can experiment with different prompts and temperatures without rerunning the entire script. Once you define your function, you can simply use the function name with your specific prompt.
Now, you only need to specify your model once, and for the rest of the time you can use the function name, in this case `ChatGPT_API`, with your specific prompt.
API’s Power: F-strings
Some of you might wonder: isn’t it more convenient to use the desktop version? After all, my texts are extremely long, and how can I upload files (Key Principle 4)? Here is the API solution: instead of including long inputs within the prompt string, use Python’s formatted strings (or `f-strings`).
To use an `f-string`, follow these steps:
- Define a variable that holds your input.
- Add an `f` character before the prompt string.
- Place the variable within curly brackets `{}`, delimited by triple backticks, inside the prompt string.
This integrates the variable’s content into the prompt. When you print the prompt, it will include the text string.
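In isolation, the three steps look like this (a minimal sketch, with a short made-up snippet standing in for your long input):

```python
# Step 1: a variable holding the (potentially very long) input text
code = "x <- c(1, 2, 3)"

# Steps 2 and 3: an f character before the string, the variable in curly
# brackets, and triple backticks marking where the input starts and ends
prompt = f"Explain the given code, delimited by triple backticks, step by step. ```{code}```"

print(prompt)
```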
Example Python Script
from openai import OpenAI
import os
# Set your API key
client = OpenAI(api_key= os.environ.get("OPENAI_API_KEY"))
# Step 1: Define a string variable; in this case we insert code
code = """
income_split <- initial_split(income_frac, prop = 0.80, strata = income)
income_train <- training(income_split)
income_test <- testing(income_split)
"""
# Step 2 and Step 3: Add an f character before the prompt string and place the variable within curly brackets {} inside the prompt string.
prompt = f"""Explain the given code, delimited by triple backticks, step by step.```{code}```"""
# Get the generated response (assumes the ChatGPT_API function defined earlier is in your environment)
response = ChatGPT_API(prompt)
print("\n Full Prompt: \n", prompt)
print("\n Generated Answer: \n", response)
OpenAI API’s Output
Full Prompt: Explain the given code, delimited by triple backticks, step by step.```
income_split <- initial_split(income_frac, prop = 0.80, strata = income)
income_train <- training(income_split)
income_test <- testing(income_split) ```
Generated Answer:
### Step-by-Step Explanation
1. **initial_split Function**:
```r
income_split <- initial_split(income_frac, prop = 0.80, strata = income)
```
- **Purpose**: This line is used to split the dataset `income_frac` into two subsets: a training set and a testing set.
- **Parameters**:
- `income_frac`: This is the dataset you want to split.
- `prop = 0.80`: This specifies that 80% of the data should go into the training set, and the remaining 20% should go into the testing set.
....
Conclusion
Congratulations! You are now familiar with the OpenAI API in Python. You have successfully created your API key and walked through a prompt call using the API. This code is of course a good starting point, and we recommend saving it as a template. Along the way, we also touched on prompt engineering by specifying distinct parts of the prompt, using `f-strings`. So, to end the tutorial:
- Copy and paste the code, add your API key, and save it as a file.
- If you ever need it, grab the template code, add your prompt, and press run code.